Christoph Norbert Bergmeir
New methods, evaluation standards, and readily usable implementations for time series forecasting are investigated.
We develop regime-switching models trained with memetic algorithms as efficient time series modeling and forecasting procedures. They are hybrid methods that combine statistical models with Computational Intelligence (CI) optimization procedures. They offer the advantages of mathematical soundness and interpretability, and can be accurate forecasters, as our study suggests.
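A minimal sketch of the regime-switching idea, here a two-regime threshold autoregressive process with purely illustrative coefficients and threshold, written in R:

```r
# Two-regime threshold autoregressive (SETAR-type) process; all values illustrative.
set.seed(1)
n <- 300
y <- numeric(n)
for (t in 2:n) {
  if (y[t - 1] <= 0) {
    y[t] <- 0.6 * y[t - 1] + rnorm(1, sd = 0.5)   # regime 1
  } else {
    y[t] <- -0.4 * y[t - 1] + rnorm(1, sd = 0.5)  # regime 2
  }
}
# One-step-ahead forecast: the regime is selected by the last observation.
last <- y[n]
forecast <- if (last <= 0) 0.6 * last else -0.4 * last
forecast
```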
The family of memetic algorithms with local search chains is a state-of-the-art CI optimization procedure. We not only use it to adjust the parameters of regime-switching models, but also implement it as an R package, making it available for global optimization to the R community. In a comparative study we show that it is competitive with, and often better than, many other optimization algorithm implementations available in R. We furthermore implement two more software packages for the R programming language: tsExpKit facilitates structured, reproducible experiments in time series forecasting, and RSNNS provides a neural network toolkit for the R community. RSNNS contains implementations of many standard network architectures that R was previously lacking, and it has attracted a considerable number of users.
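A minimal usage sketch of the optimization package, assuming it is the one published on CRAN as Rmalschains and that the malschains() argument names shown here match the installed version:

```r
library(Rmalschains)             # assumed CRAN name of the package described above
sphere <- function(x) sum(x^2)   # simple test function to minimize
# Argument names below follow the package documentation as remembered and
# should be checked against the installed version.
res <- malschains(sphere,
                  lower = rep(-5, 10),   # lower bounds of the search space
                  upper = rep(5, 10),    # upper bounds of the search space
                  maxEvals = 20000)      # budget of function evaluations
res   # the returned object holds the best solution found and its fitness
```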
In our work on predictor evaluation procedures, we perform an extensive review of the state of the art and show that, for purely autoregressive models forecasting stationary time series, blocked cross-validation can be used without theoretical or practical problems.
When CI methods are used for forecasting, this is by far the most common use case, so blocked cross-validation could become a standard procedure in the evaluation of CI methods for time series forecasting. Cross-validation is also of particular interest when directional accuracy measures are used: such measures binarize the forecasts and thereby lose information, so that in practical applications the traditional out-of-sample evaluation procedure may no longer be able to distinguish between models. Cross-validation can overcome this problem, as it uses more data to compute the error measures.
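A minimal sketch of blocked cross-validation for a purely autoregressive model, using an illustrative simulated series, lag order, and number of blocks, in base R:

```r
# Blocked cross-validation for an AR model; series, order, and blocks are illustrative.
set.seed(2)
y <- arima.sim(list(ar = 0.7), n = 240)
p <- 3                                    # autoregressive order
emb <- embed(as.numeric(y), p + 1)        # column 1 is y_t, remaining columns are its lags
k <- 5                                    # number of contiguous blocks
blocks <- split(seq_len(nrow(emb)), cut(seq_len(nrow(emb)), k, labels = FALSE))
errors <- sapply(blocks, function(test) {
  train <- setdiff(seq_len(nrow(emb)), test)
  fit <- lm(emb[train, 1] ~ emb[train, -1])          # fit AR(p) on the remaining blocks
  pred <- cbind(1, emb[test, -1]) %*% coef(fit)      # predict the held-out block
  mean((emb[test, 1] - pred)^2)                      # MSE on the held-out block
})
mean(errors)   # blocked cross-validation error estimate
```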