Interpretability Challenges in Machine Learning Models

  • Gabriel Marín Díaz [1]; Ramón A. Carrasco González [1]; Daniel Gómez González [1]
    1. [1] Universidad Complutense de Madrid, Madrid, Spain

  • Published in: Moving technology ethics at the forefront of society, organisations and governments / coordinated by Jorge Pelegrín Borondo, Mario Arias Oliva, Kiyoshi Murata, Ana María Lara Palma, 2021, ISBN 978-84-09-28672-0, pp. 205-217
  • Language: English
  • Bibliographic references
    • Abdul, A., Vermeulen, J., Wang, D., Lim, B. Y., & Kankanhalli, M. (2018). Trends and trajectories for explainable, accountable and...
    • Adversarial Robustness Toolbox. (2021). Adversarial Robustness Toolbox. https://adversarial-robustness-toolbox.org/
    • AI Explainability 360. (2021). AI Explainability 360. https://aix360.mybluemix.net/
    • Bastani, O., Kim, C., & Bastani, H. (2017). Interpreting blackbox models via model extraction. ArXiv.
    • BBC Mundo. (2016). Tay, la robot racista y xenófoba de Microsoft. BBC. https://www.bbc.com/mundo/noticias/2016/03/160325_tecnologia_microsoft_tay_bot_adolesc...
    • BBC Mundo Tecnología. (2015). Google pide perdón por confundir a una pareja negra con gorilas. BBC. https://www.bbc.com/mundo/noticias/2015/07/150702_tecnologia_google_perdon_confundir_a...
    • Bert, G. (2018). Google BERT. https://cloud.google.com/tpu/docs/tutorials/bert
    • Blackmer, W. S. (2018). EU general data protection regulation. American Fuel and Petrochemical Manufacturers, AFPM Labor Relations/Human...
    • Britannica, E. (2018). MYCIN. https://www.britannica.com/technology/MYCIN
    • Bundy, A. (2017). Preparing for the future of Artificial Intelligence. AI & Society, 32(2), 285–287. https://doi.org/10.1007/s00146-016-0685-0
    • Business, C. (2019). Apple co-founder Steve Wozniak says Apple Card discriminated against his wife. https://edition.cnn.com/2019/11/10/business/goldman-sachs-apple-card-discrimination/index.html
    • Carvalho, D. V., Pereira, E. M., & Cardoso, J. S. (2019). Machine learning interpretability: A survey on methods and metrics. Electronics...
    • Casella, G., Fienberg, S., & Olkin, I. (2013). An Introduction to Statistical Learning. In Springer Texts in Statistics. http://books.google.com/books?id=9tv0taI8l6YC
    • Clancey, W. J. (1987). The GUIDON Program. MIT Press Series in Artificial Intelligence.
    • Comisión Europea. (2020). Libro Blanco sobre la Inteligencia Artificial un enfoque europeo orientado a la excelencia y la confianza. Comisión...
    • Commission, E. (2018). Artificial Intelligence for Europe Communication. https://ec.europa.eu/transparency/regdoc/rep/1/2018/EN/COM-2018-237-F1-EN-MAIN-PART-1.PDF
    • Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
    • DataRobot. (2021). DataRobot. https://www.datarobot.com/wiki/interpretability/
    • Day, M. (2016). How LinkedIn’s search engine may reflect a gender bias. The Seattle Times. https://www.seattletimes.com/business/microsoft/how-linkedins-search-engine-may-reflect-a-bias/
    • Digitales, S., Unidos, E., Europa, H., & Digital, P. E. (2020). Los Estados miembros y la Comisión colaborarán para impulsar la inteligencia...
    • Doshi-Velez, F., & Kim, B. (2017). Towards A Rigorous Science of Interpretable Machine Learning. Ml, 1–13. http://arxiv.org/abs/1702.08608
    • Doshi-Velez, F., & Kim, B. (2018). Considerations for Evaluation and Generalization in Interpretable Machine Learning. 3–17. https://doi.org/10.1007/978-3-319-98131-4_1
    • Duval, A. (2019). Explainable Artificial Intelligence (XAI). April. https://doi.org/10.13140/RG.2.2.24722.09929
    • European Commission. (2019). COM(2019) 168 final Building Trust in Human Centric Artificial Intelligence. 11. https://ec.europa.eu/digital-single-market/en/news/communication-building-trust-human-centric-artificial-intelligence
    • Fast Company. (2019). I applied for an Apple Card. What they offered was a sexist insult. https://www.fastcompany.com/90429224/i-applied-for-an-apple-card-what-they-offered-was-a-sexist-insult
    • Golovin, D., Solnik, B., Moitra, S., Kochanski, G., Karro, J., & Sculley, D. (2017). Google Vizier: A service for black-box optimization....
    • Goodman, B., & Flaxman, S. (2017). European union regulations on algorithmic decision making and a “right to explanation.” AI Magazine,...
    • Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G. Z. (2019). XAI-Explainable artificial intelligence. Science...
    • H2O.ai. (2020). H2O Driverless AI. https://www.h2o.ai/products/h2o-driverless-ai/
    • Hand, D., & Paulos, J. A. (1992). Innumeracy: Mathematical Illiteracy and its Consequences. In Applied Statistics (Vol. 41, Issue 1)....
    • Honegger, M. R. (2018). Shedding Light on Black Box Machine Learning Algorithms. August.
    • Hughes, R., Edmond, C., Wells, L., Glencross, M., Zhu, L., & Bednarz, T. (2020). eXplainable AI (XAI). 1–62. https://doi.org/10.1145/3415263.3419166
    • Kahneman, D. (1981). The Simulation Heuristic.
    • Kahneman, D. (2012). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011. In Etc (Issue October).
    • Kahng, M., Andrews, P. Y., Kalro, A., & Chau, D. H. P. (2018). ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models....
    • Larson, J., Mattu, S., Kirchner, L., & Angwin, J. (2016). How We Analyzed the COMPAS Recidivism Algorithm. ProPublica. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
    • Lipton, P. (1990). Contrastive explanation. Contrastivism in Philosophy, 11–34. https://doi.org/10.4324/9780203117477
    • Lipton, Z. C. (2018). The mythos of model interpretability. Communications of the ACM, 61(10), 35–43. https://doi.org/10.1145/3233231
    • Liu, H., Cocea, M., & Gegov, A. (2016). Interpretability of computational models for sentiment analysis. Studies in Computational Intelligence,...
    • Microsoft. (2021). Instalar el SDK de Azure Machine Learning para Python. https://docs.microsoft.com/es-es/python/api/overview/azure/ml/install?preserveview=true&view=azure-ml-py
    • Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007
    • Molnar, C. (2019). Interpretable Machine Learning. A Guide for Making Black Box Models Explainable. Book, 247. https://christophm.github.io/interpretable-ml-book
    • Munchen, T. U. (2021). European approach to Artificial Intelligence. E-Conversion Proposal for a Cluster of Excellence, 29–50. https://ec.europa.eu/digital-single-market/en/news/communication-building-trust-human-centric-artificial-intelligence
    • Nickerson, R. S. (1998). Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Zeitschrift Für Neurologie, 199(1–2), 145–150. https://doi.org/10.1007/BF00316552
    • Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). Model-Agnostic Interpretability of Machine Learning. Whi. http://arxiv.org/abs/1606.05386
    • Ross, C. (2018). Watson for Oncology. STAT, 1–30. papers3://publication/uuid/5566F158-417A-46D3-B583-04EE273812A1
    • Roy, M. (2017). Cathy O’Neil. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown...
    • Royal Society of Great Britain. (2017). Machine learning: the power and promise of computers that learn by example. In Report by the...
    • Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature...
    • Samad, M. D., Ulloa, A., Wehner, G. J., Jing, L., Hartzel, D., Good, C. W., Williams, B. A., Haggerty, C. M., & Fornwalt, B. K. (2019)....
    • Standardization, I. O. (2021). ISO. International Organization for Standardization. https://www.iso.org/committee/6794475.html
    • Sundararajan, M., Taly, A., & Yan, Q. (2017). Axiomatic attribution for deep networks. ArXiv.
    • Tan, S., Caruana, R., Hooker, G., & Lou, Y. (2018). Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation....
    • UE. (2016). Artículo 22 UE RGDP. https://www.privacy-regulation.eu/es/22.htm
    • UNE. (2021). UNE Normalización Española. https://www.une.org/encuentra-tu-norma/comites-tecnicos-de-normalizacion/comite/?c=CTN 71/SC 42
    • Weller, A. (2019). Transparency: Motivations and Challenges. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial...
    • Zhang, J., Wang, Y., Molino, P., Li, L., & Ebert, D. S. (2018). Manifold: A model-agnostic framework for interpretation and diagnosis...
