Esteban José Palomo Ferrer, Juan Miguel Ortiz de Lazcano Lobato, David Fernández Rodríguez, Ezequiel López Rubio, María Maza
Continual learning addresses the stability-plasticity dilemma in order to avoid catastrophic forgetting when dealing with non-stationary distributions. Prior work has focused on supervised or reinforcement learning, and few works have considered continual learning for unsupervised methods. In this paper, a novel approach that provides continual learning for competitive neural networks is proposed. To this end, we propose a modified learning rate function that copes with non-stationary distributions by allowing the model to keep adapting continuously. Experimental results on synthetic images that change over time confirm the performance of our proposal.
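The abstract does not specify the exact form of the proposed learning rate function. As a minimal sketch of the general idea of keeping a competitive network plastic under distribution shift, the following Python snippet uses a decaying learning rate clamped to a lower bound; the rate formula, class name, and parameter values are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Illustrative sketch (not the paper's exact rule): online competitive
# learning where the learning rate decays but is clamped to a floor,
# so prototypes can keep adapting when the input distribution shifts.
class CompetitiveNetwork:
    def __init__(self, n_units, dim, eta0=0.5, decay=1e-3, eta_min=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(n_units, dim))   # prototype vectors
        self.eta0, self.decay, self.eta_min = eta0, decay, eta_min
        self.t = 0                                  # global time step

    def learning_rate(self):
        # Hypothetical rate: exponential decay with a lower bound, so the
        # rate never vanishes and the network remains plastic.
        return max(self.eta_min, self.eta0 * np.exp(-self.decay * self.t))

    def update(self, x):
        self.t += 1
        bmu = np.argmin(np.linalg.norm(self.w - x, axis=1))  # winner unit
        eta = self.learning_rate()
        self.w[bmu] += eta * (x - self.w[bmu])                # move winner toward x
        return bmu


# Usage: feed samples whose distribution drifts halfway through the stream.
net = CompetitiveNetwork(n_units=4, dim=2)
for t in range(2000):
    center = np.array([0.0, 0.0]) if t < 1000 else np.array([3.0, 3.0])
    x = center + 0.1 * np.random.randn(2)
    net.update(x)
```

Because the rate is bounded away from zero, the prototypes can follow the second cluster after the shift instead of freezing at the first one, which is the continual-learning behavior the abstract describes.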