Ramón Alberto Mollineda Cardenas, J. S. Sanchez, José Martínez Sotoca
It is widely accepted that the empirical behavior of classifiers strongly depends on the available data. For a given problem, it is rather difficult to guess which classifier will perform best, or to set a proper expectation for classification performance. Traditional experimental studies report the accuracy of a set of classifiers on a small number of problems, without analyzing why one classifier outperforms the others. Recently, some researchers have tried to characterize data complexity and relate it to classifier performance. In this paper, we present a general meta-learning framework based on a number of data complexity measures. We also discuss the applicability of this method to several problems in pattern analysis.
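To make the notion of a data complexity measure concrete, here is a minimal sketch of one commonly used measure of this kind, Fisher's maximum discriminant ratio (often called F1 in the data complexity literature). The function name and the two-class restriction are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

def fisher_ratio_f1(X, y):
    """Fisher's maximum discriminant ratio (F1): for each feature,
    the squared between-class mean difference over the summed
    within-class variances; F1 is the maximum over all features.
    Higher values suggest an easier (more linearly separable) problem.
    This sketch handles the two-class case only."""
    classes = np.unique(y)
    assert len(classes) == 2, "sketch assumes a two-class problem"
    a, b = X[y == classes[0]], X[y == classes[1]]
    num = (a.mean(axis=0) - b.mean(axis=0)) ** 2
    den = a.var(axis=0) + b.var(axis=0)
    return float(np.max(num / den))

# Two well-separated Gaussian clusters yield a large F1 value.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(fisher_ratio_f1(X, y))
```

A meta-learning framework of the kind described would compute several such measures per dataset and use them as features to predict which classifier is likely to perform well.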