Juan Miguel Ortiz-de-Lazcano-Lobato [1]; José David Fernández-Rodríguez [1]; Ezequiel López-Rubio [1]; Rosa María Maza-Quiroga [1]
[1] Málaga, Spain
Eds.: José Ramón Álvarez Sánchez (conference chair), Félix de la Paz López (conference chair), Hojjat Adeli (author), 2022, ISBN 978-3-031-06527-9, pp. 223-232

Continual learning addresses the stability-plasticity dilemma in order to avoid catastrophic forgetting when dealing with non-stationary distributions. Prior work has focused on supervised or reinforcement learning, and few studies have considered continual learning for unsupervised methods. In this paper, a novel approach that provides continual learning for competitive neural networks is proposed. To this end, we propose a learning rate function that copes with non-stationary distributions by adapting the model so that it learns continuously. Experimental results on synthetic images that change over time confirm the performance of our proposal.
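The abstract does not specify the paper's learning rate function. As an illustration only, the following sketch shows the general idea behind plasticity-preserving rates in online competitive (winner-take-all) learning: the rate decays with time but is bounded below by a positive constant, so prototypes can keep tracking a distribution that shifts. The specific function `adaptive_rate` and all parameter values here are hypothetical, not taken from the paper.

```python
import numpy as np

def adaptive_rate(n, eta0=0.5, eta_min=0.05, tau=100.0):
    """Hypothetical learning rate: decays with step n but stays
    above eta_min, preserving plasticity for continual learning."""
    return eta_min + (eta0 - eta_min) / (1.0 + n / tau)

def competitive_update(prototypes, x, n):
    """Winner-take-all update: move the closest prototype toward x."""
    winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    prototypes[winner] += adaptive_rate(n) * (x - prototypes[winner])
    return prototypes

rng = np.random.default_rng(0)
protos = rng.normal(size=(4, 2))

# Non-stationary stream: cluster at (0, 0), then an abrupt shift to (5, 5).
for n in range(2000):
    center = np.zeros(2) if n < 1000 else np.full(2, 5.0)
    x = center + 0.3 * rng.normal(size=2)
    protos = competitive_update(protos, x, n)

# Because the rate never reaches zero, at least one prototype
# follows the distribution shift instead of freezing in place.
```

With a classical rate that decays to zero, the prototypes would effectively stop moving before the shift at step 1000 and never adapt; the lower bound on the rate is what keeps the model plastic.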