On Adjusted Viterbi Training

  • Authors: Alexey Koloydenko, Meelis Käärik, Jüri Lember
  • Published in: Acta Applicandae Mathematicae, ISSN 0167-8019, Vol. 96, No. 1-3, 2007, pp. 309-326
  • Language: English
  • DOI: 10.1007/s10440-007-9102-5
  • Full text not available
  • Abstract
    • The EM algorithm is a principal tool for parameter estimation in hidden Markov models, where its efficient implementation is known as the Baum-Welch algorithm. This paper, however, is motivated by applications where EM is replaced by Viterbi training, or extraction (VT), also known as the Baum-Viterbi algorithm. VT is computationally less intensive, more stable, and more intuitively appealing, but its estimators are biased and inconsistent. We have recently proposed adjusted Viterbi training (VA), a new method that alleviates this imprecision of the VT estimators while preserving the computational advantages of the baseline VT algorithm. The key difference between VA and VT is that, asymptotically, the true parameter values are a fixed point of VA (and of EM), but not of VT. We previously studied VA for the special case of Gaussian mixtures, including simulations illustrating its improved performance. The present work proves the asymptotic fixed-point property of VA for general hidden Markov models.
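
To make the contrast in the abstract concrete, below is a minimal, illustrative Python sketch (not the authors' code) of plain Viterbi training for a toy two-state HMM with Gaussian emissions: parameters are re-estimated from the single Viterbi path rather than from the posterior-weighted expectations used by EM/Baum-Welch, which is the source of the bias that the VA adjustment targets. The model, initialisation, and all names are assumptions for illustration only; the VA correction itself is not reproduced here, since its form is given in the paper rather than in this abstract.

```python
# Minimal sketch of plain Viterbi training (VT) for a toy 2-state Gaussian HMM.
# VT re-estimates parameters from the single most likely state path instead of
# averaging over all paths with posterior weights (as EM/Baum-Welch does).
# All names, values, and the initialisation scheme are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def viterbi(obs, log_trans, means, var):
    """Most likely state path under the current parameters (log domain)."""
    n, K = len(obs), len(means)
    log_emit = -0.5 * (obs[:, None] - means[None, :]) ** 2 / var \
               - 0.5 * np.log(2 * np.pi * var)
    delta = np.full((n, K), -np.inf)
    psi = np.zeros((n, K), dtype=int)
    delta[0] = np.log(1.0 / K) + log_emit[0]            # uniform initial distribution
    for t in range(1, n):
        scores = delta[t - 1][:, None] + log_trans       # scores[i, j]: reach j via i
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[t]
    path = np.zeros(n, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(n - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

def viterbi_training(obs, n_iter=20):
    """Plain VT: alternate Viterbi decoding with re-estimation from the hard path."""
    K = 2
    means = np.array([obs.min(), obs.max()], dtype=float)   # crude initialisation
    var = obs.var()
    log_trans = np.log(np.full((K, K), 1.0 / K))
    for _ in range(n_iter):
        path = viterbi(obs, log_trans, means, var)
        # M-step analogue: all counts come from the single decoded path;
        # this hard assignment is what makes the VT estimators biased.
        counts = np.ones((K, K))                             # +1 smoothing keeps probs positive
        for a, b in zip(path[:-1], path[1:]):
            counts[a, b] += 1
        log_trans = np.log(counts / counts.sum(axis=1, keepdims=True))
        for k in range(K):
            if np.any(path == k):
                means[k] = obs[path == k].mean()
        var = np.mean((obs - means[path]) ** 2)
    return means, var, np.exp(log_trans)

# Simulate a short sequence from a known 2-state Gaussian HMM and fit it with VT.
true_trans = np.array([[0.9, 0.1], [0.2, 0.8]])
true_means = np.array([-1.0, 2.0])
states = [0]
for _ in range(999):
    states.append(rng.choice(2, p=true_trans[states[-1]]))
states = np.array(states)
obs = true_means[states] + rng.normal(size=1000)

means, var, trans = viterbi_training(obs)
print("estimated means:", means)        # typically close to, but biased away from, (-1, 2)
print("estimated transitions:\n", trans)
```

Because the hard path ignores uncertainty for observations near the decision boundary, the VT estimates tend to deviate systematically from the true parameters even as the sample grows; this is the inconsistency the abstract refers to, and the adjustment in VA is designed so that the true parameter values become an asymptotic fixed point.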

