
Documat


On the Performance of Deep Learning Models for Time Series Classification in Streaming

    1. [1] Universidad de Sevilla, Sevilla, Spain

    2. [2] Universidad Pablo de Olavide, Sevilla, Spain

  • Published in: 15th International Conference on Soft Computing Models in Industrial and Environmental Applications (SOCO 2020): Burgos, Spain; September 2020 / coordinated by Álvaro Herrero Cosío, Carlos Cambra Baseca, Daniel Urda Muñoz, Javier Sedano Franco, Héctor Quintián Pardo, Emilio Santiago Corchado Rodríguez, 2021, ISBN 978-3-030-57802-2, pp. 144-154
  • Language: English
  • Full text not available
  • Abstract
    • Processing data streams that arrive at high speed requires models that can deliver fast and accurate predictions. Although deep neural networks are the state of the art for many machine learning tasks, their performance in real-time data streaming scenarios is a research area that has not yet been fully addressed. Nevertheless, there have been recent efforts to adapt complex deep learning models to streaming tasks by reducing their processing rate. The asynchronous dual-pipeline deep learning framework is designed to predict on incoming instances and update the model simultaneously, using two separate pipelines. The aim of this work is to assess the performance of different types of deep architectures for data stream classification under this framework. We evaluate models such as multi-layer perceptrons, recurrent, convolutional, and temporal convolutional neural networks over several time-series datasets simulated as streams. The results indicate that convolutional architectures achieve higher performance in terms of both accuracy and efficiency.
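The core idea of the dual-pipeline design described in the abstract — predicting on each incoming instance immediately while model updates run concurrently on a separate path — can be sketched with standard Python threads and a queue. This is a minimal illustration only: the class and function names are hypothetical, and a trivial majority-class model stands in for the deep networks evaluated in the paper.

```python
import queue
import threading

class MajorityClassModel:
    """Toy stand-in for the deep model: predicts the majority label seen so far."""
    def __init__(self):
        self.counts = {}
        self.lock = threading.Lock()

    def predict(self, instance):
        with self.lock:
            if not self.counts:
                return 0  # default class before any update has happened
            return max(self.counts, key=self.counts.get)

    def update(self, instance, label):
        with self.lock:
            self.counts[label] = self.counts.get(label, 0) + 1

def run_dual_pipeline(stream):
    """Predict on every instance right away; train asynchronously in a second thread."""
    model = MajorityClassModel()
    train_q = queue.Queue()
    predictions = []

    def trainer():
        # Training pipeline: consumes labeled instances at its own pace.
        while True:
            item = train_q.get()
            if item is None:  # sentinel: stream exhausted
                break
            model.update(*item)

    t = threading.Thread(target=trainer)
    t.start()
    for instance, label in stream:
        predictions.append(model.predict(instance))  # fast prediction pipeline
        train_q.put((instance, label))               # hand off to training pipeline
    train_q.put(None)
    t.join()
    return predictions

# Example: a small simulated labeled stream
stream = [([0.1], 1), ([0.2], 1), ([0.3], 0), ([0.4], 1)]
preds = run_dual_pipeline(stream)
print(len(preds))  # one prediction per incoming instance
```

Because training is decoupled from prediction, the fast path never blocks on a model update, which is the property that makes heavy architectures viable in a streaming setting.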

