Documat


Fair and Interpretable Mathematical Methods for Prediction Models

  • Author: Rafael Jiménez Llamas
  • Thesis supervisors: Emilio Carrizosa Priego, Josefa Ramírez Cobo
  • Defense: Universidad de Sevilla (Spain), 2025
  • Language: English
  • Number of pages: 127
  • Links
    • Open-access thesis at: Idus
  • Abstract
    • Machine learning algorithms have expanded in recent years to encompass areas of high impact on people’s lives, such as health, finance, education, and justice. In these areas, there is a need to make decisions in a transparent, interpretable way and without bias or discrimination. This thesis addresses the fairness, interpretability, and accuracy of different models from a Bayesian perspective, taking advantage of this framework to quantify uncertainty in predictions as well as in the different fairness metrics implemented, so that trade-offs between objectives can be assessed in a robust way.

      The thesis makes three main contributions. 1) For linear regression, an empirical Bayes–based approach is introduced in which, after choosing an appropriate unfairness metric between sensitive and non-sensitive groups, the evidence is maximized subject to a constraint on the hyperparameters that bounds the unfairness metric. This gives explicit control over the level of fairness while maximizing the model’s accuracy. 2) For logistic regression, we develop a variant of mean-field Bayesian variational inference in which the optimization objective is augmented with a penalty depending on the expected unfairness, again yielding a trade-off between accuracy and fairness. 3) Finally, the approach of the second contribution is extended with a new prior distribution structure that adds control over the model’s sparsity. The resulting method controls sparsity, fairness (through a modified penalty similar to that in 2)), and accuracy simultaneously, making it possible to study the relationships among these three objectives of interest.

      In summary, the thesis expands the state of the art in Bayesian fairness, showing that fair and interpretable models can be created and, thanks to variational inference, also scale to modern datasets while allowing the quantification of uncertainty for the parameters of interest, adding an extra layer of usefulness for fair and responsible decision-making.
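
      To make the second contribution concrete, the following is a minimal illustrative sketch, not the thesis implementation: a Monte Carlo estimate of a fairness-penalized mean-field variational objective for Bayesian logistic regression. The choice of unfairness metric (a demographic-parity gap in mean predicted probability), the standard-normal prior, and all variable names are assumptions made here for illustration.

```python
# Illustrative sketch (assumed details, not the thesis code): mean-field variational
# objective for Bayesian logistic regression with an added expected-unfairness penalty.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def penalized_objective(mu, log_sigma, X, y, s, lam=1.0, n_samples=200):
    """Monte Carlo estimate of  -ELBO + lam * E[unfairness]  under a mean-field
    Gaussian posterior q(w) = N(mu, diag(sigma^2)) and prior p(w) = N(0, I).
    s is a binary sensitive attribute; the unfairness term (an assumption here)
    is the gap in mean predicted probability between the two groups."""
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal((n_samples, mu.size))
    W = mu + eps * sigma                      # reparameterized posterior samples
    p = sigmoid(X @ W.T)                      # shape (n_points, n_samples)
    # expected negative Bernoulli log-likelihood over posterior samples
    nll = -np.mean(np.sum(y[:, None] * np.log(p + 1e-12)
                          + (1 - y)[:, None] * np.log(1 - p + 1e-12), axis=0))
    # KL(q || p) between diagonal Gaussian and standard normal, closed form
    kl = 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * log_sigma)
    # expected unfairness: demographic-parity gap, averaged over samples
    unfair = np.mean(np.abs(p[s == 1].mean(axis=0) - p[s == 0].mean(axis=0)))
    return nll + kl + lam * unfair

# Toy data: two features plus a binary sensitive attribute influencing the label.
X = rng.standard_normal((100, 2))
s = (rng.random(100) < 0.5).astype(int)
y = (X[:, 0] + 0.5 * s + 0.1 * rng.standard_normal(100) > 0).astype(int)
obj = penalized_objective(np.zeros(2), np.zeros(2), X, y, s, lam=2.0)
```

      Increasing `lam` trades accuracy for fairness: the optimizer is pushed toward posteriors whose predictive distributions are more similar across the sensitive groups, which is the trade-off the thesis quantifies with full posterior uncertainty rather than point estimates.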
