Firenze, Italia
Classification trees are one of the most common models in interpretable machine learning. Although such models are usually built with greedy strategies, in recent years, thanks to remarkable advances in mixed-integer programming (MIP) solvers, several exact formulations of the learning problem have been developed. In this paper, we argue that some of the most relevant of these training models can be encapsulated within a general framework, whose instances are shaped by the specification of loss functions and regularizers. Next, we introduce a novel realization of this framework: specifically, we consider the logistic loss, handled in the MIP setting by a piecewise-linear approximation, and couple it with l1-regularization terms. The resulting optimal logistic classification tree model numerically proves able to induce trees with enhanced interpretability properties and competitive generalization capabilities, compared with state-of-the-art MIP-based approaches.
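The paper itself does not specify the approximation scheme here, but a standard way to handle the convex logistic loss inside a MIP is to replace it with a piecewise-linear lower approximation built from tangent lines, enforced via epigraph constraints. The sketch below (an illustration under this assumption, not the authors' exact formulation; the breakpoint grid is arbitrary) shows how such tangent pieces are constructed and evaluated:

```python
import numpy as np

def logistic_loss(z):
    # log(1 + exp(-z)), computed stably
    return np.logaddexp(0.0, -z)

def tangent_pieces(breakpoints):
    # For the convex logistic loss, tangent lines at chosen breakpoints
    # give a piecewise-linear lower approximation: max_k (a_k * z + b_k).
    pieces = []
    for t in breakpoints:
        slope = -1.0 / (1.0 + np.exp(t))      # d/dz log(1 + exp(-z))
        intercept = logistic_loss(t) - slope * t
        pieces.append((slope, intercept))
    return pieces

def pwl_loss(z, pieces):
    # Epigraph form usable in a MIP: minimize s subject to
    # s >= a_k * z + b_k for all pieces k; here we just evaluate the max.
    return max(a * z + b for a, b in pieces)

# Illustrative breakpoint grid (an assumption, not from the paper).
pieces = tangent_pieces(np.linspace(-4.0, 4.0, 9))
for z in (-2.0, 0.0, 3.0):
    # Tangents to a convex function never exceed the true loss.
    assert pwl_loss(z, pieces) <= logistic_loss(z) + 1e-9
```

Because the pieces are linear in the decision variables, the `s >= a_k * z + b_k` constraints keep the training problem within the reach of standard MIP solvers, at the cost of an approximation error controlled by the number of breakpoints.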