Leveraging ChatGPT in foreign language teaching in higher education
(original Spanish title: Aprovechamiento de ChatGPT en la enseñanza de lengua extranjera en educación superior)

  • Ribes Lafoz, María [1]; Navarro Colorado, Borja [1]
    1. [1] Universitat d'Alacant, Alicante, Spain

  • Published in: Educación y sociedad: claves interdisciplinares / coordinated by Delfín Ortega Sánchez, Alexander López Padrón, 2023, ISBN 978-84-10054-35-6, pp. 1264-1271
  • Language: Spanish
  • Abstract
    • The last decade has witnessed remarkable advances in the field of Artificial Intelligence, particularly in Natural Language Processing (NLP) and Natural Language Generation (NLG), made possible by the development and enhancement of Deep Learning techniques and Large Language Models (LLMs). ChatGPT is nowadays one of the most widely known and used of these: an advanced AI tool designed to understand and generate human-like text based on a vast amount of data. In this work we put forward an innovative proposal for higher education that involves using ChatGPT-3 as a supervised classroom tool employed mainly by students rather than by teachers, with the aim of fostering their linguistic skills by transforming the automatically generated written text into an oral presentation to be delivered in front of the class. Our ultimate goal is not to determine whether the student has made fraudulent use of GPT, but rather to leverage all the advantages it can offer to improve the teaching and learning of a foreign language within the CLIL framework.

      A minimal illustrative sketch of this classroom workflow appears after the reference list below.

  • Bibliographic references
    • Almagro Gorbea, M., López Rosendo, E., Mederos Martín, A. y Torres Ortiz, M. (2010). Los sarcófagos antropoides de la necrópolis de Cádiz....
    • Bachman, L. F. y Palmer, A. S. (2022). Language assessment in practice: Developing language assessments and justifying their use in the real...
    • Bender, E. M., Gebru, T., McMillan-Major, A. y Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?...
    • Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S.,...
    • Consejo de Europa. (2020). Marco común europeo de referencia para las lenguas: Aprendizaje, enseñanza, evaluación. (Vol. complementario)....
    • Council of Europe. (2020). Common European framework of reference for languages: Learning, teaching, assessment; companion volume. Council...
    • Coyle, D., Hood, P. y Marsh, D. (2010). CLIL: Content and language integrated learning. Cambridge University Press.
    • Devlin, J., Chang, M.-W., Lee, K. y Toutanova, K. (2018). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding....
    • Eco, U. (1984). Apocalípticos e integrados (7ª ed.). Lumen.
    • European Commission. Joint Research Centre. (2017). European framework for the digital competence of educators: DigCompEdu. Publications Office....
    • Jurafsky, D. y Martin, J. H. (2023). Speech and Language Processing. https://web.stanford.edu/~jurafsky/slp3/
    • Lu, K., Yang, H. H., Shi, Y. y Wang, X. (2021). Examining the key influencing factors on college students’ higher-order thinking skills in...
    • Meyliana, Sablan, B., Surjandy y Hidayanto, A. N. (2021). Flipped learning effect on classroom engagement and outcomes in university information...
    • OCDE. (2021). Marco de Evaluación de Lengua Extranjera PISA 2025. PISA. OECD Publishing.
    • OpenAI. (2023). GPT-4 Technical Report. Cornell University, arXiv:2303.08774. https://doi.org/10.48550/ARXIV.2303.08774
    • Ramesh, A., Pavlov, M. y Goh, G. (2021). DALL·E: Creating images from text. https://openai.com/research/dall-e
    • Rombach, R., Blattmann, A., Lorenz, D., Esser, P. y Ommer, B. (2021). High-Resolution Image Synthesis with Latent Diffusion Models. Cornell...
    • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L. y Polosukhin, I. (2017). Attention Is All You Need....
    • Wang, H., Li, J., Wu, H., Hovy, E. y Sun, Y. (2022). Pre-Trained Language Models and Their Applications. Engineering, https://doi.org/10.1016/j.eng.2022.04.024
    • Yang, J., Pavone, M. y Wang, Y. (2023). FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization. Cornell University,...
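
As an illustration of the classroom workflow described in the abstract, the following Python sketch shows how a short written draft could be requested from a GPT model programmatically. This is only a minimal sketch under stated assumptions: the paper does not specify whether the ChatGPT web interface or the OpenAI API is used in class, and the model name, prompt wording, topic and CEFR level below are illustrative choices, not details taken from the paper.

    # Minimal sketch of the workflow described in the abstract: a student asks a
    # GPT model for a short written draft on a CLIL topic, which they must then
    # rework into an oral presentation delivered in front of the class.
    # Assumptions (not from the paper): use of the OpenAI Python client (v1.x),
    # the model name, the prompt wording, the topic and the CEFR level.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    topic = "renewable energy"  # hypothetical CLIL content topic
    level = "B2"                # hypothetical CEFR target level

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the "ChatGPT-3" mentioned in the abstract
        messages=[
            {"role": "system",
             "content": f"You are a writing assistant for {level}-level learners of English."},
            {"role": "user",
             "content": f"Write a 200-word expository text about {topic} that a student "
                        "will later turn into a 3-minute oral presentation."},
        ],
    )

    # The generated draft is only the starting point: the assessed work is the
    # student's supervised transformation of this text into an oral presentation.
    print(response.choices[0].message.content)

Pinning the CEFR level in the system prompt is one way to keep the generated draft roughly aligned with the learners' proficiency band (Council of Europe, 2020); any equivalent prompt formulation, or the same request typed into the ChatGPT interface, would serve the same purpose.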
