
Documat


Boosting en el modelo de aprendizaje PAC

  • Mendoza, Ricardo (Politécnico Grancolombiano, Colombia)

  • Location: Elementos, ISSN-e 2248-5252, Vol. 3, No. 3, 2013 (Issue devoted to: Elementos), pp. 37-48
  • Language: Spanish
  • DOI: 10.15765/e.v3i3.413
  • Parallel titles:
    • Boosting in the PAC learning model
  • Abstract

      A review of the idea of Boosting in the PAC learning model is presented. A review of the first practical Boosting method, adaptive boosting (AdaBoost), is also provided, giving details concerning the theoretical guarantees on error convergence and exploring the important concept of margin.

  • References
    • Schapire, R.E.: The strength of weak learnability. Machine Learning. 5, 197–227 (1990).
    • Schapire, R.E., Freund, Y., Bartlett, P., Lee, W.S.: Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics. 26, 1651–1686 (1998).
    • Maclin, R., Opitz, D.: Popular ensemble methods: An empirical study (2011). Retrieved from: http://arxiv.org/pdf/1106.0257.
    • Rudin, C., Daubechies, I., Schapire, R.E.: On the dynamics of boosting. NIPS Proceedings. (2003).
    • Schapire, R.E.: The boosting approach to machine learning: An overview. Lecture Notes in Statistics. Springer Verlag. 149–172 (2003).
    • Freund, Y., Schapire, R., Abe, N.: A short introduction to boosting. Journal of the Japanese Society For Artificial Intelligence. 14, 1612 (1999).
    • Sebastiani, F.: Machine learning in automated text categorization. ACM computing surveys (CSUR). 34, 1–47 (2002).
    • Long, P.M., Servedio, R.A., Anderson, R.N., Boulanger, A.: Systems and methods for martingale boosting in machine learning. Google Patents...
    • Godec, M., Grabner, H., Leistner, C., Bischof, H.: Speeding Up Semi-Supervised On-line Boosting for Tracking. (2010).
    • Chen, K., Wang, S.: Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions. Pattern Analysis and Machine Intelligence, IEEE Transactions on. 33, 129–143 (2011).
    • Schapire, R.E., Freund, Y.: Boosting: Foundations and Algorithms. (2012).
    • Valiant, L.G.: A theory of the learnable. Communications of the ACM. 27, 1134–1142 (1984).
    • Pitt, L., Valiant, L.G.: Computational limitations on learning from examples. Journal of the ACM (JACM). 35, 965–984 (1988).
    • Kearns, M., Mansour, Y., Ng, A.Y., Ron, D.: An experimental and theoretical comparison of model selection methods. Machine Learning. 27, 7–50...
    • Haussler, D.: Probably Approximately Correct Learning. Proceedings of the Eighth National Conference on Artificial Intelligence (1990).
    • Vapnik, V., Chervonenkis, A.: Uniform convergence of frequencies of occurrence of events to their probabilities. Dokl. Akad. Nauk SSSR. pp....
    • Blumer, A., Ehrenfeucht, A., Haussler, D., Warmuth, M.K.: Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM (JACM). 36,...
    • Pestov, V.: PAC learnability versus VC dimension: a footnote to a basic result of statistical learning. Neural Networks (IJCNN), The 2011...
    • Freund, Y., Schapire, R.: A decision-theoretic generalization of on-line learning and an application to boosting. Computational learning theory....
    • Rudin, C., Daubechies, I., Schapire, R.E.: The dynamics of AdaBoost: Cyclic behavior and convergence of margins. The Journal of Machine Learning...
    • Merler, S., Caprile, B., Furlanello, C.: Parallelizing AdaBoost by weights dynamics. Computational Statistics & Data Analysis. 51, 2487–2498...
    • Schapire, R.E.: The convergence rate of adaboost. The 23rd Conference on Learning Theory, open problem (2010).
    • Blanchard, G., Lugosi, G., Vayatis, N., others: On the rate of convergence of regularized boosting classifiers. The Journal of Machine Learning...
    • Reyzin, L., Schapire, R.E.: How boosting the margin can also boost classifier complexity. Proceedings of the 23rd international conference... pp. 753–760 (2006).
    • Rosset, S., Zhu, J., Hastie, T.: Boosting as a regularized path to a maximum margin classifier. The Journal of Machine Learning Research. 5,...
    • Rudin, C., Cortes, C., Mohri, M., Schapire, R.: Margin-based ranking meets boosting in the middle. Learning Theory. 63–78 (2005).
    • Koltchinskii, V., Panchenko, D.: Empirical Margin Distributions and Bounding the Generalization Error of Combined Classifiers. Annals of Statistics....
    • Barak, O., Rigotti, M.: A simple derivation of a bound on the perceptron margin using singular value decomposition. Neural computation. 23,...
    • Shen, C., Li, H.: Boosting through optimization of margin distributions. Neural Networks, IEEE Transactions on. 21, 659–666 (2010).
    • Agapitos, A., O’Neill, M., Brabazon, A., Theodoridis, T.: Maximum margin decision surfaces for increased generalisation in evolutionary decision...
    • Freund, Y., Mason, L.: The alternating decision tree learning algorithm. Machine Learning - International Workshop then Conference. pp....
    • Burges, C.J.C.: A tutorial on support vector machines for pattern recognition. Data mining and knowledge discovery. 2, 121–167 (1998).
    • Smola, A.J., Schölkopf, B.: Learning with kernels. MIT Press (1998).
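The AdaBoost procedure that the abstract reviews can be sketched as follows. This is a minimal illustration using decision stumps as the weak learner; the function and variable names are ours, not the article's, and the exhaustive stump search is just one possible weak learner:

```python
import numpy as np

def adaboost(X, y, T=20):
    """AdaBoost with decision stumps; labels y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # example weights D_t, start uniform
    stumps, alphas = [], []
    for _ in range(T):
        best = None
        # weak learner: exhaustive search over axis-aligned decision stumps
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] <= thr, 1, -1)
                    err = w[pred != y].sum()   # weighted training error
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign, pred)
        err, j, thr, sign, pred = best
        err = max(err, 1e-12)                     # guard against division by zero
        alpha = 0.5 * np.log((1 - err) / err)     # weight of this weak hypothesis
        w *= np.exp(-alpha * y * pred)            # up-weight misclassified examples
        w /= w.sum()                              # renormalize to a distribution
        stumps.append((j, thr, sign))
        alphas.append(alpha)
    return stumps, alphas

def predict(X, stumps, alphas):
    """Sign of the weighted vote; |F(x)| relates to the margin the article discusses."""
    F = np.zeros(X.shape[0])
    for (j, thr, sign), a in zip(stumps, alphas):
        F += a * sign * np.where(X[:, j] <= thr, 1, -1)
    return np.sign(F)
```

For instance, on the 1-D alternating-label set `X = [[0], [1], [2], [3]]`, `y = [1, -1, 1, -1]`, no single stump is consistent, but a few boosting rounds combine stumps into a classifier with zero training error, illustrating the error-convergence guarantee the abstract mentions.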
