In this paper, an entirely new procedure for the classification of high-dimensional vectors on the basis of a few training samples is described. The proposed method follows the Bayesian paradigm and provides posterior probabilities that a new vector belongs to each of the classes; it therefore adapts naturally to any number of classes. The classification technique rests on a small vector that can be viewed as a regression of the new observation onto the space spanned by the training samples, which is similar to the Support Vector Machine classification paradigm. This is achieved by employing matrix-variate distributions in classification, which is itself a new idea.
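The abstract does not spell out the matrix-variate construction, so the following is only a minimal sketch of the general idea it describes: regress the new observation onto the span of the training samples to obtain a small coefficient vector, and convert per-class reconstruction quality into posterior class probabilities. The function name, the least-squares regression, the Gaussian noise model, and the uniform class prior are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def posterior_probabilities(X_train, labels, y, noise_var=1.0):
    """Hypothetical sketch: regress a new vector y onto the span of the
    training samples (columns of X_train) and turn per-class reconstruction
    errors into posterior class probabilities (uniform prior, Gaussian noise)."""
    classes = np.unique(labels)
    # The "small vector": one regression coefficient per training sample.
    beta, *_ = np.linalg.lstsq(X_train, y, rcond=None)
    log_post = []
    for c in classes:
        # Reconstruct y using only the coefficients of class c.
        beta_c = np.where(labels == c, beta, 0.0)
        resid = y - X_train @ beta_c
        # Gaussian log-likelihood of the residual (up to an additive constant).
        log_post.append(-0.5 * resid @ resid / noise_var)
    log_post = np.array(log_post)
    # Normalize in a numerically stable way to obtain posterior probabilities.
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return dict(zip(classes, post))

# Toy usage: 6 training vectors in R^50, two classes, one test vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))
labels = np.array([0, 0, 0, 1, 1, 1])
y = X[:, 0] + 0.1 * rng.normal(size=50)
print(posterior_probabilities(X, labels, y))
```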