Argumentative Conversational Agents for Explainable Artificial Intelligence
Please use this identifier to cite or link to this item:
http://hdl.handle.net/10347/31084
Item metadata
Title: | Argumentative Conversational Agents for Explainable Artificial Intelligence |
Author: | Stepin, Ilia |
Supervisors: | Alonso Moral, José María; Catalá Bolos, Alejandro |
Center/Department: | Universidade de Santiago de Compostela. Escola de Doutoramento Internacional (EDIUS); Universidade de Santiago de Compostela. Programa de Doutoramento en Investigación en Tecnoloxías da Información |
Keywords: | explainable artificial intelligence | counterfactuals | dialogue game | interpretable fuzzy modelling | human evaluation |
Date: | 2023 |
Abstract: | Recent years have witnessed a striking rise of artificial intelligence algorithms that show outstanding performance. However, such performance is often achieved at the expense of explainability. The lack of algorithmic explainability can not only undermine the user's trust in the algorithmic output, but also cause adverse consequences. In this thesis, we advocate the use of interpretable rule-based models that can serve both as stand-alone applications and as proxies for black-box models. More specifically, we design an explanation generation framework that outputs contrastive, selected, and social explanations for interpretable (decision tree and rule-based) classifiers. We show that the resulting explanations enhance the effectiveness of AI algorithms while preserving their transparent structure. |
URI: | http://hdl.handle.net/10347/31084 |
Rights: | Attribution-NonCommercial-NoDerivatives 4.0 International |