Attesting Digital Discrimination Using Norms

  • Natalia Criado [1]; Xavier Ferrer [1]; Jose M. Such [1]
    1. [1] King's College London, United Kingdom

  • Published in: IJIMAI, ISSN-e 1989-1660, Vol. 6, No. 5, 2021, pp. 16-23
  • Language: English
  • DOI: 10.9781/ijimai.2021.02.008
  • Abstract
    • More and more decisions have recently been delegated to Machine Learning (ML) and automatic decision systems. Despite initial misconceptions that these systems are unbiased and fair, recent cases such as racist algorithms being used to inform parole decisions in the US, low-income neighborhoods being targeted with high-interest loans and low credit scores, and women being undervalued by online marketing have fueled public distrust in machine learning. This poses a significant challenge to the adoption of ML by companies and public sector organisations, despite ML having the potential to deliver significant cost reductions and more efficient decisions, and it is motivating research in the area of algorithmic fairness and fair ML. Much of that research is aimed at providing detailed statistics, metrics and algorithms which are difficult for someone without technical skills to interpret and use. This paper tries to bridge the gap between lay users and fairness metrics by using simpler notions and concepts to represent and reason about digital discrimination. In particular, we use norms as an abstraction to communicate situations that may lead to algorithms committing discrimination. We then formalise non-discrimination norms in the context of ML systems and propose an algorithm to attest whether ML systems violate these norms.
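
As a rough illustration of the kind of condition a non-discrimination norm over an ML system's outputs could encode, the sketch below applies the generic "four-fifths" disparate-impact rule to binary decisions grouped by a protected attribute. This is a minimal Python sketch, not the attestation algorithm proposed in the paper (which this record does not reproduce); the function names, the 0.8 threshold and the toy data are assumptions made only for illustration.

# Illustrative only: a generic "four-fifths rule" disparate-impact check on
# binary model decisions, grouped by a protected attribute. It is NOT the
# norm-attestation algorithm proposed by Criado, Ferrer and Such; function
# names, the 0.8 threshold and the toy data are assumptions for this example.
from collections import defaultdict

def positive_rates(groups, decisions):
    """Fraction of positive (1) decisions per protected-attribute group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, d in zip(groups, decisions):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

def violates_norm(groups, decisions, threshold=0.8):
    """Flag a violation when some group's positive rate falls below
    `threshold` times the rate of the best-treated group."""
    rates = positive_rates(groups, decisions)
    best = max(rates.values())
    return any(r < threshold * best for r in rates.values()), rates

if __name__ == "__main__":
    # Hypothetical loan decisions labelled with a protected attribute.
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    decisions = [1, 1, 1, 0, 1, 0, 0, 0]
    violated, rates = violates_norm(groups, decisions)
    print("positive rates per group:", rates)
    print("non-discrimination norm violated:", violated)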

