Multi-task learning (MTL) is a powerful framework that exploits the similarities between several machine learning tasks to improve on the solutions obtained by independent task-specific models. Support Vector Machines (SVMs) are well suited to this setting, and Cai et al. have proposed additive MTL SVMs, in which the final model is the sum of a common model shared by all tasks and a task-specific one. In this work we propose a different formulation of this additive approach, in which the final model is a convex combination of the common and task-specific models. The convex mixing hyper-parameter λ takes values between 0 and 1, where a value of 1 is mathematically equivalent to a single common model for all tasks, whereas a value of 0 corresponds to independent task-specific models. We show that for λ values strictly between 0 and 1 this convex approach is equivalent to the additive one of Cai et al. when the remaining SVM parameters are properly selected. Moreover, the predictions of the proposed convex model are themselves convex combinations of the common and task-specific predictions, which makes this formulation easier to interpret. Finally, the convex formulation simplifies hyper-parameter selection, since λ is constrained to the interval [0, 1], in contrast with the unbounded range of the additive MTL SVMs.
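As a minimal sketch of the idea (the notation below is assumed for illustration and is not taken verbatim from the paper), if f_0 denotes the common model and f_t the model of task t, the convex predictor can be written as

\[
f_t^{\mathrm{conv}}(x) \;=\; \lambda\, f_0(x) \;+\; (1 - \lambda)\, f_t(x), \qquad \lambda \in [0, 1],
\]

so that λ = 1 recovers a single model shared by all tasks, λ = 0 recovers fully independent task-specific models, and intermediate values interpolate between the two extremes.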