
Documat


Efficient optimization methods for regularized learning support vector machines and total-variation regularization

  • Author: Álvaro Barbero Jiménez
  • Thesis supervisor: José Ramón Dorronsoro Ibero
  • Defended: at the Universidad Autónoma de Madrid (Spain) in 2011
  • Language: English
  • Thesis examination committee: Ramón Figueras (chair), Carlos Santa Cruz Fernández (secretary), Antonio Artés Rodríguez (member), Suvrit Sra (member), David Ríos Insua (member)
  • Abstract
    • In the context of machine learning methods, regularization has become an established practice to control overfitting in the modeling process and to induce structure into the resulting models. At the same time, the flexibility of the regularization framework has provided a common point of view embracing classical and established learning models, as well as recent proposals in the topic. This richness comes from its appealing simplicity, which casts the learning process into a composite optimization problem formed by a loss function and a regularizer; different models are obtained through the selection of appropriate loss and regularizer functions.
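      The composite formulation described in the abstract can be written schematically as follows (a generic illustration of the framework, not a formula quoted from the thesis):

      ```latex
      \min_{w} \; \sum_{i=1}^{n} \ell\bigl(f(x_i; w),\, y_i\bigr) \;+\; \lambda\, \Omega(w)
      ```

      Here $\ell$ is the loss on training pairs $(x_i, y_i)$, $\Omega$ is the regularizer, and $\lambda > 0$ trades data fit against model structure. Choosing the hinge loss with a squared $\ell_2$ regularizer recovers the Support Vector Machine, while a squared loss with a Total-Variation regularizer yields the signal-denoising problems studied later in the thesis.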

      This elegant modularity, however, does not come without cost, as an adequate optimization algorithm must be applied or devised in order to solve the resulting problem. While general-purpose solvers are directly applicable out-of-the-box in some settings, they usually produce poor results in terms of efficiency and scalability. Further, in more complex models featuring non-smooth or even non-convex loss or regularizer functions, such approaches easily become inapplicable. Consequently, the design of appropriate optimization methods becomes a key task for the success of a regularized learning process.

      In this thesis two particular cases of regularization are studied in depth. On the one hand, the well-established and successful Support Vector Machine model is presented in its different forms. A careful examination of the current algorithmic solutions to this problem shows that correcting hidden deficiencies and making better use of the gathered information can lead to significant improvements in running times, surpassing state-of-the-art methods. On the other hand, a class of sparsity-inducing regularizers known as Total-Variation is studied, with wide application in the fields of signal and image processing. While a variety of approaches have been applied to solve this class of problems, it is shown here that by taking advantage of their strong structural properties and adapting suitable optimization algorithms, relevant improvements in efficiency and scalability can be obtained as well. Software implementing the developed methods is also made available as part of this thesis.
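      To illustrate the kind of Total-Variation problem the abstract refers to, the sketch below solves 1D TV denoising by projected gradient on its dual, a standard Chambolle-style scheme. This is a minimal illustration only: the function name and parameters are chosen here for exposition, and the thesis develops considerably more efficient specialized solvers than this generic iteration.

      ```python
      import numpy as np

      def tv1d_denoise(y, lam, n_iter=500):
          """Sketch of 1D total-variation denoising.

          Solves  min_x  0.5 * ||x - y||^2 + lam * sum_i |x[i+1] - x[i]|
          via projected gradient on the dual problem
              min_{|u| <= lam}  0.5 * ||D^T u - y||^2,
          where D is the forward-difference operator and x = y - D^T u.
          """
          u = np.zeros(len(y) - 1)      # dual variable, one entry per difference
          tau = 0.25                    # step size: ||D D^T|| <= 4 in 1D, so tau <= 1/4
          for _ in range(n_iter):
              x = y + np.diff(u, prepend=0.0, append=0.0)   # x = y - D^T u
              u = np.clip(u + tau * np.diff(x), -lam, lam)  # gradient step + projection
          return y + np.diff(u, prepend=0.0, append=0.0)

      # Example: denoise a noisy step signal
      rng = np.random.default_rng(0)
      y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
      x = tv1d_denoise(y, lam=0.5)
      ```

      The result is a near-piecewise-constant signal: the TV penalty removes the small noise-driven differences while the large step in the underlying signal is (partially) preserved, which is precisely the structural property that the thesis's specialized solvers exploit.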

