Documat


Regularized model learning in EDAs for continuous and multi-objective optimization

  • Author: Hossein Karshenas Najafabadi
  • Thesis supervisors: Pedro Larrañaga Múgica, Concha Bielza Lozoya
  • Defended at: Universidad Politécnica de Madrid (Spain), 2013
  • Language: English
  • Thesis committee: José Antonio Lozano Alonso (chair), Ruben Armañanzas Arnedillo (secretary), Roberto Santana Hermida (member), Jesús García Herrero (member), Fernando Lobo (member)
  • Abstract
    • Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs) and determines their behavior and performance in optimization. Regularization is a well-known statistical technique for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. ℓ1-regularization is a variant of this technique with an appealing variable selection property that results in sparse model estimates. In this thesis, we study the use of regularization techniques for model learning in EDAs.

      Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented and analyzed from different aspects when used for optimization in a high-dimensional setting, where the EDA population size scales logarithmically with the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs more robust optimization and achieves significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method.

      Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. With this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships. An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimate of the multi-objective problem structure.

      Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that ranking methods based on this dominance relation can yield competitive EDA performance with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on ℓ1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets of small to medium dimensionality, using two different Bayesian classifiers, shows that Pareto sets of feature subsets comparable to or better than those of standard methods are approximated.
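To make the regularized model learning described in the abstract concrete, the following is a minimal sketch, not the thesis implementation, of one possible Gaussian EDA loop that uses ℓ1-regularized covariance estimation via scikit-learn's GraphicalLasso; the sphere objective, the population sizing constant, and the regularization strength are all illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso  # l1-regularized Gaussian estimation

rng = np.random.default_rng(0)

def sphere(x):
    # Toy minimization objective; stands in for the continuous benchmarks.
    return np.sum(x ** 2, axis=1)

n_vars = 100
pop_size = int(25 * np.log(n_vars))  # logarithmic population sizing, as in the abstract
population = rng.uniform(-5, 5, size=(pop_size, n_vars))

for generation in range(20):
    # Truncation selection: keep the better half of the population.
    fitness = sphere(population)
    selected = population[np.argsort(fitness)[: pop_size // 2]]

    # l1-regularized Gaussian estimation: the graphical lasso keeps the
    # precision matrix sparse and well-conditioned even though the number
    # of selected solutions is far smaller than the number of variables.
    model = GraphicalLasso(alpha=0.1, max_iter=200).fit(selected)

    # Sample the next population from the regularized Gaussian model.
    population = rng.multivariate_normal(
        mean=selected.mean(axis=0), cov=model.covariance_, size=pop_size
    )

print("best fitness:", sphere(population).min())
```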
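The joint variable-objective modeling in the thesis is built on Bayesian networks; as a loose Gaussian analogue only, one can fit a single ℓ1-regularized model over the concatenation of variables and objective values and read objective-variable and objective-objective links off the sparse precision matrix. The data, penalty, and threshold below are made up for illustration.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)

# Hypothetical population: 200 solutions with 5 decision variables.
X = rng.normal(size=(200, 5))
# Two synthetic objectives: f1 depends on x0 and x1, f2 on x1 and x2.
f1 = X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=200)
f2 = X[:, 1] - X[:, 2] + 0.1 * rng.normal(size=200)

# Joint modeling: one sparse Gaussian over [variables | objectives].
joint = np.column_stack([X, f1, f2])
model = GraphicalLasso(alpha=0.05).fit(joint)

# Nonzero off-diagonal precision entries indicate direct dependencies:
# variable-variable, objective-variable, and objective-objective links,
# giving an estimate of the multi-objective problem structure.
names = [f"x{i}" for i in range(5)] + ["f1", "f2"]
P = model.precision_
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if abs(P[i, j]) > 1e-3:
            print(f"{names[i]} -- {names[j]}")
```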
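The α-degree Pareto dominance relation orders solutions whose objective values are known only as intervals. The Monte Carlo sketch below illustrates one plausible reading, assuming all objectives are minimized, the noise within each interval is independent and uniform, and a solution α-dominates another when its estimated dominance probability reaches α; the precise definition and its properties are those given in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def dominance_degree(a, b, n_samples=100_000):
    """Estimate the probability that solution a Pareto-dominates solution b.

    a, b: sequences of (low, high) interval bounds, one pair per objective.
    Values are assumed uniform within each interval and all objectives are
    minimized (illustrative assumptions, not the thesis definition).
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    fa = rng.uniform(a[:, 0], a[:, 1], size=(n_samples, len(a)))
    fb = rng.uniform(b[:, 0], b[:, 1], size=(n_samples, len(b)))
    # Standard Pareto dominance on each sampled realization.
    dominates = np.all(fa <= fb, axis=1) & np.any(fa < fb, axis=1)
    return dominates.mean()

def alpha_dominates(a, b, alpha=0.9):
    # a alpha-degree-dominates b if its dominance probability is at least alpha.
    return dominance_degree(a, b) >= alpha

a = [(1.0, 2.0), (0.5, 1.0)]  # interval objectives of solution a
b = [(1.8, 3.0), (1.1, 2.0)]  # interval objectives of solution b
print(dominance_degree(a, b), alpha_dominates(a, b))
```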

