AI hallucinations? What about human hallucination?!: Addressing human imperfection is needed for an ethical AI

  • Tlili, Ahmed [1]; Burgos, Daniel [2]
    1. [1] Beijing Normal University, China
    2. [2] Universidad Internacional de La Rioja, Logroño, Spain

  • Published in: IJIMAI, ISSN-e 1989-1660, Vol. 9, No. 2, 2025, pp. 68-71
  • Language: English
  • DOI: 10.9781/ijimai.2025.02.010
  • Abstract
    • This study discusses how human imperfection, also referred to as human hallucination, can contribute to or amplify hallucination in technology generally and in Artificial Intelligence (AI) particularly. While the ongoing debate concentrates on improving AI for its ethical use, the focus should also shift to us, humans, who are the technology's designers, developers, and users. Identifying and understanding the link between human and AI hallucination will ultimately help develop effective and safe AI-powered systems that can have a positive societal impact in the long run.

