Abstract of Nobody writing and nobody reading: artificial intelligence chatbots and the science we want

Ariel Guersenzvaig, Javier Sánchez-Monedero

  • Since their mass introduction in late 2022, AI chatbots based on Large Language Models (LLMs), such as ChatGPT and later versions like GPT-4, have continued to attract press attention. These systems can generate texts, CVs, translations and audio transcriptions. Their writing capacity is allegedly so advanced that they can produce abstracts coherent enough that even experts have been unable to detect that a machine wrote them. Proposals for applied uses abound; it has been suggested, for example, that they could be used to predict the early stages of Alzheimer’s disease. However, many have also insisted that these systems lack any genuine capacity to understand the texts they process (whether “reading” or “writing”); for this reason, they have been characterized as “stochastic parrots”. Other problems have likewise been highlighted, such as the lack of transparency about training data, privacy, biases, and the so-called “hallucinations” and falsehoods they produce. While the interest is real, it cannot currently be stated with certainty that this technology has been widely adopted in formal work processes, or that its use has spread beyond experimentation, leisure, or the satisfaction of curiosity. This is, without doubt, a question that warrants elucidation through serious empirical studies.

