José Antonio Lozano Alonso (thesis co-director)
This dissertation explores novel methods for improving machine learning models under the Learning Using Privileged Information (LUPI) paradigm in supervised classification. First, two logistic regression-based methods are presented that integrate privileged features by projecting the parameters of the full model onto a space restricted to the regular features. Second, a knowledge distillation approach is proposed, in which a teacher model trained on both regular and privileged features transfers knowledge to a student model that uses only regular features. Because the teacher is not necessarily reliable or error-free, the proposed distillation framework includes a mechanism that guides the student to mimic the teacher when it is correct and to deviate from it when it misclassifies, supported by a modified cross-entropy loss that properly penalizes the interaction between teacher and student. Finally, the dissertation proposes a multi-task framework in which one task predicts the privileged features from the regular ones and another uses the regular and predicted privileged features to produce the final prediction; this framework is also addressed with knowledge distillation techniques. Since privileged information does not inherently guarantee improved model performance, each chapter introduces approaches designed to maximize its advantages and to clarify its impact on model performance. All methods are validated on a variety of datasets, showing significant improvements over current state-of-the-art techniques. The work contributes both theoretical insights and practical solutions for leveraging additional training-time information in real-world scenarios.
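To make the projection idea concrete, one plausible reading (an illustrative sketch, not necessarily the dissertation's exact formulation) is: fit a logistic regression with parameters $w_{\text{full}}$ on the concatenated features $\tilde{x} = (x, x^{*})$, where $x$ are the regular and $x^{*}$ the privileged features, and then select the regular-feature parameters $\hat{w}$ whose decision function best approximates the full model's:

$$ \hat{w} \;=\; \arg\min_{w \in \mathbb{R}^{d}} \; \mathbb{E}_{x}\!\big[\big(w_{\text{full}}^{\top}\tilde{x} - w^{\top}x\big)^{2}\big]. $$

The deployed classifier then scores new examples with $\hat{w}^{\top}x$, using only the features available at test time.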
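The correctness-guided distillation objective can likewise be sketched. In the hedged form below, the gating indicator $\delta_i$, the mixing weight $\lambda \in [0,1]$, and the temperature-softened teacher probabilities $p_i^{T}$ are illustrative assumptions rather than the dissertation's exact modified cross-entropy:

$$ \mathcal{L}(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n}\Big[(1-\lambda\,\delta_i)\,\ell\big(y_i,\,s_\theta(x_i)\big) \;+\; \lambda\,\delta_i\,\ell\big(p_i^{T},\,s_\theta(x_i)\big)\Big], \qquad \delta_i=\mathbf{1}\{\text{teacher classifies } x_i \text{ correctly}\}, $$

where $\ell$ is cross-entropy and $s_\theta$ is the student trained on regular features alone. When the teacher errs ($\delta_i = 0$), the imitation term vanishes and the student is trained on the ground-truth label only, i.e., it deviates from the teacher.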
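Finally, the multi-task framework can be sketched in a few lines of Python with scikit-learn. The estimators and toy data below are assumptions chosen for illustration; the dissertation's actual models and training scheme may differ:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

# Toy data: X_reg are regular features; X_priv are privileged features,
# available only at training time; y are the class labels.
rng = np.random.default_rng(0)
X_reg = rng.normal(size=(200, 5))
X_priv = X_reg @ rng.normal(size=(5, 2)) + 0.1 * rng.normal(size=(200, 2))
y = (X_reg[:, 0] + X_priv[:, 0] > 0).astype(int)

# Task 1: predict the privileged features from the regular ones.
priv_predictor = Ridge().fit(X_reg, X_priv)

# Task 2: classify using the regular features plus the *predicted*
# privileged features, so no privileged input is needed at test time.
X_aug = np.hstack([X_reg, priv_predictor.predict(X_reg)])
clf = LogisticRegression().fit(X_aug, y)

# Deployment: only regular features are available.
X_new = rng.normal(size=(10, 5))
print(clf.predict(np.hstack([X_new, priv_predictor.predict(X_new)])))
```

At test time the pipeline consumes only regular features, matching the LUPI constraint that privileged information is available exclusively during training.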