Analysis and convergence of SMO-like decomposition and geometrical algorithms for support vector machines

  • Author: Jorge López Lázaro
  • Thesis supervisor: José Ramón Dorronsoro Ibero
  • Defense: Universidad Autónoma de Madrid (Spain), 2011
  • Language: English
  • Thesis committee: Gonzalo Martínez-Muñoz (chair), Alberto Suárez González (secretary), Johan A.K. Suykens (member), Manuel Martínez (member), Sancho Salcedo Sanz (member)
  • Abstract
    • Support Vector Machines (SVMs) constitute one of the most successful paradigms in Machine Learning today. Their success stems from the fact that they are relatively simple models with excellent generalization properties for classification and regression, and that they arise from solving convex optimization problems. In addition, the models are interpretable in terms of the so-called Support Vectors, the training points that actually influence the final model.
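
      For reference, the standard soft-margin SVM primal problem underlying this description can be written as follows; the notation (weights w, bias b, slacks xi_i, penalty C, feature map phi) is conventional and is our addition, not part of the original abstract:

        \min_{w, b, \xi} \; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{N} \xi_i
        \quad \text{s.t.} \quad y_i \left( w^\top \phi(x_i) + b \right) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1, \dots, N.

      The Support Vectors are precisely the training points whose margin constraints are active at the optimum.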

      These models have the form of a hyperplane, so SVMs are a variety of linear models. Even though linear models are in principle of limited use, the power behind SVMs comes from the use of kernel functions, which effectively build non-linear models after arriving at a linear model in a projected feature space. The success of SVMs is also reflected in the numerous variations on them formulated over the years. Among these, Least Squares SVMs (LS-SVMs) have been especially relevant. A parallel line of research investigates the geometrical formulation of SVMs as Nearest Point Problems.
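
      As an illustrative sketch of the kernel trick (our addition, not text from the abstract), inner products in the projected feature space are replaced by kernel evaluations, so training only ever needs the kernel matrix; for instance, with a Gaussian (RBF) kernel:

        import numpy as np

        def rbf_kernel(X, Z, gamma=1.0):
            """Gaussian (RBF) kernel matrix K[i, j] = exp(-gamma * ||X[i] - Z[j]||^2)."""
            sq_dists = (
                np.sum(X**2, axis=1)[:, None]
                + np.sum(Z**2, axis=1)[None, :]
                - 2.0 * X @ Z.T
            )
            return np.exp(-gamma * sq_dists)

        # The resulting decision function
        #   f(x) = sum_i alpha_i * y_i * k(x_i, x) + b
        # is linear in the feature space but non-linear in the input space.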

      Although the optimization problems giving rise to SVMs have a simple structure, it is not trivial to solve these tasks efficiently. The main difficulty comes from the size of the kernel matrix, which grows with the square of the number of training patterns. This precludes the use of standard optimization routines and requires the design of ad-hoc methods. Perhaps the simplest method of all is Sequential Minimal Optimization (SMO). Despite its simplicity, some variations of the original algorithm, jointly termed "SMO-like" methods, can be considered the state of the art in SVM training.
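
      The two-variable update at the heart of SMO can be sketched as follows; this is a deliberately simplified version (in the spirit of Platt's simplified SMO, with a random choice of the second variable), not the working-set selection heuristics that state-of-the-art variants use:

        import numpy as np

        def smo_simplified(K, y, C=1.0, tol=1e-3, max_passes=10):
            """Simplified SMO for the SVM dual: repeatedly pick a pair of dual
            variables (alpha_i, alpha_j), solve the two-variable subproblem
            analytically, and clip the result to the box [0, C].
            K is the precomputed kernel matrix, y the vector of +/-1 labels."""
            n = len(y)
            alpha, b = np.zeros(n), 0.0
            rng = np.random.default_rng(0)
            passes = 0
            while passes < max_passes:
                changed = 0
                for i in range(n):
                    Ei = (alpha * y) @ K[:, i] + b - y[i]  # prediction error on x_i
                    if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                        j = rng.integers(n - 1)
                        j += j >= i  # crude heuristic: pick j != i at random
                        Ej = (alpha * y) @ K[:, j] + b - y[j]
                        ai_old, aj_old = alpha[i], alpha[j]
                        # Box ends so that sum_k alpha_k y_k = 0 keeps holding.
                        if y[i] != y[j]:
                            L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                        else:
                            L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                        eta = 2.0 * K[i, j] - K[i, i] - K[j, j]  # curvature of the subproblem
                        if L == H or eta >= 0:
                            continue
                        alpha[j] = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                        if abs(alpha[j] - aj_old) < 1e-5:
                            continue
                        alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
                        # Recompute the bias b from the KKT conditions.
                        b1 = b - Ei - y[i] * (alpha[i] - ai_old) * K[i, i] - y[j] * (alpha[j] - aj_old) * K[i, j]
                        b2 = b - Ej - y[i] * (alpha[i] - ai_old) * K[i, j] - y[j] * (alpha[j] - aj_old) * K[j, j]
                        b = b1 if 0 < alpha[i] < C else (b2 if 0 < alpha[j] < C else (b1 + b2) / 2)
                        changed += 1
                passes = passes + 1 if changed == 0 else 0
            return alpha, b

      Production solvers compute kernel rows on demand and cache them, since the full matrix often cannot even be stored; the sketch above precomputes K only for brevity.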

      In this thesis, after theoretically motivating SVMs and the SMO algorithm, we formulate a general problem that encompasses all the specific formulations enumerated above. The SMO algorithm can be adapted to this general problem with minor changes, in such a way that the SMO variants for the different formulations are recovered as particular cases. Moreover, we give a new and simple proof of the convergence of this general SMO version to the optimal solution.
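
      The abstract does not spell out the general problem, but a typical shape for the duals of the formulations above is a box-constrained quadratic program with a linear equality constraint, sketched here purely as an illustration (the symbols K, c, a, d and C are ours, not the thesis's):

        \min_{\alpha \in \mathbb{R}^N} \; \frac{1}{2}\, \alpha^\top K \alpha + c^\top \alpha
        \quad \text{s.t.} \quad a^\top \alpha = d, \qquad 0 \le \alpha_i \le C, \quad i = 1, \dots, N.

      An SMO step then updates two coordinates of alpha at a time, the smallest working set compatible with an equality constraint.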

