Documat


Towards more interpretable graphs and Knowledge Graph algorithms

  • Author: Unai Zulaika Zurimendi
  • Thesis supervisors: Diego López de Ipiña González de Artaza, Aitor Almeida
  • Defence: Universidad de Deusto (Spain), 2022
  • Language: English
  • Examination committee: Humberto Bustince Sola (chair), Aritz Bilbao Jayo (secretary), Gorka Azkune Galparsoro (member)
  • Links
    • Open-access thesis at: TESEO
  • Abstract
    • The increase in the amount of data generated by today’s technologies has led to the creation of large graphs and Knowledge Graphs that contain millions of facts about people, things and places in the world. Grounded in those large data stores, many Machine Learning models have been proposed for different tasks, such as predicting new links or link weights. Nevertheless, one of the main challenges of those models is their lack of interpretability. Commonly known as “black boxes”, Machine Learning models are usually not understandable to humans. This lack of interpretability becomes an even more severe problem for Knowledge Graph-related applications, including healthcare systems, chatbots, or public service management tools, where end-users require an understanding of the feedback given by the models.

      In this thesis, we present methods to increase the interpretability of graph- and Knowledge Graph-based Machine Learning models. We follow a taxonomy grounded in the type of output produced by the proposed methods. Each method is suitable for particular use cases and scenarios, and can help end-users in different ways. Specifically, we provide an interpretable link weight prediction method based on the Weisfeiler-Lehman graph colouring technique. Additionally, we present an adaptation of the Regularized Dual Averaging optimization method for Knowledge Graphs to obtain interpretable representations in link prediction models. Lastly, we introduce the use of Influence Functions for Knowledge Graph link prediction models to identify the most important training facts for a given prediction. Through experiments in link weight prediction and link prediction, we show that our methods can successfully increase the interpretability of Machine Learning models for graphs and Knowledge Graphs while remaining competitive with state-of-the-art methods in terms of performance.

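The abstract above builds its interpretable link weight prediction method on Weisfeiler-Lehman graph colouring. Below is a minimal Python sketch of the classic 1-dimensional Weisfeiler-Lehman colouring routine, shown only to illustrate the general technique; the adjacency-dict representation, function name and iteration count are assumptions for this example, not the thesis implementation.

def weisfeiler_lehman_colours(adjacency, iterations=3):
    """adjacency: dict mapping each node to an iterable of its neighbours."""
    # Every node starts with the same colour.
    colours = {node: 0 for node in adjacency}
    for _ in range(iterations):
        # A node's signature combines its own colour with the sorted
        # multiset of its neighbours' colours.
        signatures = {
            node: (colours[node], tuple(sorted(colours[n] for n in neighbours)))
            for node, neighbours in adjacency.items()
        }
        # Compress each distinct signature into a fresh integer colour.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colours = {node: palette[signatures[node]] for node in adjacency}
    return colours

# Nodes that end up with the same colour occupy structurally similar
# positions in the graph, which is what makes the colouring usable as a
# human-inspectable feature for downstream prediction tasks.
graph = {"a": ["b", "c"], "b": ["a"], "c": ["a", "d"], "d": ["c"]}
print(weisfeiler_lehman_colours(graph))
# On this path graph, the interior nodes a and c share one colour and
# the endpoints b and d share another.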
