Learning in real robots from environment interaction

  • Authors: Pablo Quintía Vidal, P. Iglesias, Miguel Ángel Rodríguez González, Carlos V. Regueiro, Fernando Valdés Villarrubia
  • Published in: JoPha: Journal of Physical Agents, ISSN-e 1888-0258, Vol. 6, No. 1, 2012 (issue dedicated to: Advances on physical agents), p. 6
  • Language: English
  • DOI: 10.14198/jopha.2012.6.1.06
  • Abstract
    • This article describes a proposal to achieve fast robot learning from the robot's interaction with its environment. The proposal is suitable for continuous learning procedures, as it tries to limit the instability that appears every time the robot encounters a new situation it has not seen before. Moreover, the user does not have to set a degree of exploration (as is usual in reinforcement learning), which would otherwise prevent continual learning. The proposal uses an ensemble of learners, able to combine dynamic programming and reinforcement learning, to predict when the robot will make a mistake. This information is used to dynamically evolve a set of control policies that determine the robot's actions (see the sketch after the reference list).

  • References
    • [1] B. Bakker, V. Zhumatiy, G. Gruener, J. Schmidhuber, “Quasi-Online Reinforcement Learning for Robots,” Proceedings of the International Conference...
    • [2] M. Rodriguez, R. Iglesias, C. V. Regueiro, J. Correa, and S. Barro, “Autonomous and fast robot learning through motivation,” Robotics...
    • [3] Pablo Quintia, Roberto Iglesias, Carlos V. Regueiro, Miguel A. Rodriguez, “Simultaneous learning of perception and action in mobile robots,”...
    • [4] M. A. Rodriguez, R. Iglesias, P. Quintia, C. V. Regueiro, “Parallel robot learning through an ensemble of predictors able to forecast...
    • [5] T. Kyriacou, R. Iglesias, M. Rodriguez, P. Quintia, “Unsupervised Complexity Reduction of Sensor Data for Robot Learning and Adaptation,”...
    • [6] Pablo Quintia, Roberto Iglesias, Miguel Rodriguez, Carlos Vazquez Regueiro, “Simultaneous learning of perceptions and actions in autonomous...
    • [7] R. S. Sutton and A. G. Barto, “Reinforcement learning: An introduction,” MIT Press, 1998.
    • [8] Thomas Kollar, Nicholas Roy, “Using reinforcement learning to improve exploration trajectories for error minimization,” Proceedings of...
    • [9] Andrea L. Thomaz, Guy Hoffman, Cynthia Breazeal, “Real-Time Interactive Reinforcement Learning for Robots,” AAAI 2005 Workshop on Human...
    • [10] G. A. Carpenter, S. Grossberg, D. B. Rosen, “Fuzzy ART: Fast stable learning and categorization of analog patterns by an adaptive resonance...
    • [11] M. Oubbati, B. Kord, and G. Palm, “Learning Robot-Environment Interaction Using Echo State Networks,” SAB 2010, LNAI 6226, pp. 501-510,...
    • [12] Amanda J.C. Sharkey, “Combining Artificial Neural Nets: Ensemble and Modular Multi-Net Systems,” Springer, 1999.
    • [13] Lior Rokach, “Pattern classification using ensemble methods,” World Scientific, 2010.
    • [14] Player & Stage Project, http://playerstage.sourceforge.net (accessed 26 February 2012).
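
The abstract above only outlines the approach, so the following minimal Python sketch illustrates how an ensemble of mistake predictors could drive the selection and replacement of control policies. It is an assumption-laden illustration, not the authors' implementation: the class names (MistakePredictor, PolicyEnsemble), the tabular failure estimates, the constants and the policy-replacement rule are all invented for the example.

```python
# Illustrative sketch only -- not the authors' implementation. All names
# (MistakePredictor, PolicyEnsemble) and constants are assumptions.
import random


class MistakePredictor:
    """Estimates, per discretised state, the probability that the robot
    will eventually make a mistake (e.g. a collision) if it keeps acting."""

    def __init__(self, learning_rate=0.1, backup_discount=0.9):
        self.alpha = learning_rate
        self.gamma = backup_discount
        self.p_mistake = {}              # state -> estimated failure probability

    def predict(self, state):
        return self.p_mistake.get(state, 0.5)   # unseen states are uncertain

    def update(self, trajectory, failed):
        """Back up along the visited states, value-iteration style:
        states close to a failure inherit more blame than earlier ones."""
        target = 1.0 if failed else 0.0
        for state in reversed(trajectory):
            old = self.predict(state)
            self.p_mistake[state] = old + self.alpha * (target - old)
            target *= self.gamma


class PolicyEnsemble:
    """Keeps several candidate control policies; each one has a predictor
    that forecasts failures, and policies that fail are replaced, so no
    hand-tuned exploration rate is needed."""

    def __init__(self, policies, actions):
        self.policies = list(policies)   # each policy: state -> action
        self.actions = actions
        self.predictors = [MistakePredictor() for _ in self.policies]

    def act(self, state):
        # Follow the policy whose predictor expects the fewest mistakes here.
        best = min(range(len(self.policies)),
                   key=lambda i: self.predictors[i].predict(state))
        return best, self.policies[best](state)

    def feedback(self, index, trajectory, failed):
        self.predictors[index].update(trajectory, failed)
        if failed:
            # Stand-in for the paper's policy-evolution step: replace the
            # failing policy with a randomly perturbed copy of a survivor.
            survivor = self.policies[(index + 1) % len(self.policies)]
            self.policies[index] = (
                lambda s, p=survivor: p(s) if random.random() < 0.9
                else random.choice(self.actions))
```

A caller would wrap each candidate controller as a state-to-action function, query act() at every control step, and report the visited states plus a failure flag through feedback() at the end of each episode; the intent mirrored here is that exploration emerges from replacing policies predicted to fail, rather than from a user-set exploration rate.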
