The EM algorithm is a principal tool for parameter estimation in hidden Markov models, where its efficient implementation is known as the Baum-Welch algorithm. This paper, however, is motivated by applications where EM is replaced by Viterbi training, or extraction (VT), also known as the Baum-Viterbi algorithm. VT is computationally less intensive and more stable, and has more intuitive appeal; however, VT estimators are biased and inconsistent. We have recently proposed elsewhere the adjusted Viterbi training (VA), a new method to alleviate this imprecision of the VT estimators while preserving the computational advantages of the baseline VT algorithm. The key difference between VA and VT is that, asymptotically, the true parameter values are a fixed point of VA (and of EM), but not of VT. We have previously studied VA for the special case of Gaussian mixtures, including simulations illustrating its improved performance. The present work proves the asymptotic fixed point property of VA for general hidden Markov models.
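To make the contrast concrete, below is a minimal sketch (not the authors' implementation) of one VT iteration for an HMM with Gaussian emissions: a Viterbi alignment followed by maximum-likelihood re-estimation from the resulting hard state assignments. EM/Baum-Welch would instead weight each observation by its posterior state probabilities; the hard assignments are what make VT cheaper, and also the source of its bias. All names (`viterbi_path`, `vt_step`, the unit-variance assumption) are illustrative assumptions, not from the paper.

```python
import numpy as np

def viterbi_path(obs, pi, A, means, var):
    """Most likely state sequence under the current parameters (log domain)."""
    T, K = len(obs), len(pi)
    # Log Gaussian emission densities N(obs_t; means_k, var).
    logb = (-0.5 * (obs[:, None] - means[None, :]) ** 2 / var
            - 0.5 * np.log(2 * np.pi * var))
    logdelta = np.log(pi) + logb[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = logdelta[:, None] + np.log(A)   # scores[i, j]: best path ending i -> j
        back[t] = scores.argmax(axis=0)
        logdelta = scores.max(axis=0) + logb[t]
    path = np.zeros(T, dtype=int)
    path[-1] = logdelta.argmax()
    for t in range(T - 2, -1, -1):               # backtrack
        path[t] = back[t + 1, path[t + 1]]
    return path

def vt_step(obs, pi, A, means, var):
    """One VT iteration: Viterbi alignment, then ML re-estimation from the
    hard assignments (EM would use soft posterior weights here instead)."""
    K = len(pi)
    q = viterbi_path(obs, pi, A, means, var)
    # Transition counts along the Viterbi path.
    counts = np.zeros((K, K))
    for t in range(len(q) - 1):
        counts[q[t], q[t + 1]] += 1
    A_new = (counts + 1e-9) / (counts + 1e-9).sum(axis=1, keepdims=True)
    # Emission means from the hard assignments; keep old mean for empty states.
    means_new = np.array([obs[q == k].mean() if np.any(q == k) else means[k]
                          for k in range(K)])
    return A_new, means_new, q
```

Iterating `vt_step` to a fixed point is the baseline VT procedure discussed above; because the re-estimation step conditions on the single Viterbi path rather than the full posterior, the true parameters are in general not a fixed point of this map, which is the defect that the adjustment in VA is designed to correct.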