Engaging human-to-robot attention using conversational gestures and lip-synchronization

  • Authors: Felipe Andrés Cid Burgos, Luis Jesús Manso Fernández-Argüelles, Luis Vicente Calderita Estévez, Agustín Sánchez Domínguez, Pedro Miguel Núñez Trujillo
  • Published in: JoPha: Journal of Physical Agents, ISSN-e 1888-0258, Vol. 6, No. 1, 2012 (Issue devoted to: Advances on physical agents), p. 2
  • Language: English
  • DOI: 10.14198/jopha.2012.6.1.02
  • Abstract
    • Human-Robot Interaction (HRI) is one of the most important subfields of social robotics. In several applications, text-to-speech (TTS) techniques are used by robots to provide feedback to humans. In this respect, a natural synchronization between the synthetic voice and the mouth of the robot could contribute to improving the interaction experience. This paper presents an algorithm for synchronizing Text-To-Speech systems with robotic mouths. The proposed approach estimates the appropriate aperture of the mouth based on the entropy of the synthetic audio stream provided by the TTS system. The paper also describes the cost-efficient robotic head used in the experiments and introduces the use of conversational gestures for engaging Human-Robot Interaction. The system, which is implemented in C++ and runs in real time, is freely available as part of the RoboComp open-source robotics framework. Finally, the paper presents the results of an opinion poll conducted to evaluate the interaction experience.
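The abstract does not detail how the entropy is computed, but the core idea it describes, driving mouth aperture from the entropy of short frames of the synthetic audio stream, can be illustrated with a minimal C++ sketch. Everything below is an assumption for illustration only: the 16-bit PCM input format, the amplitude-histogram entropy estimate, the bin count, and the linear entropy-to-aperture mapping are not taken from the paper.

```cpp
// Minimal sketch (not the authors' code) of entropy-driven lip-synchronization:
// estimate the Shannon entropy of each short TTS audio frame and map it to a
// normalized mouth aperture. PCM format, bin count, and mapping are assumptions.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Shannon entropy (in bits) of a histogram over 16-bit PCM sample amplitudes.
double frameEntropy(const std::vector<int16_t> &frame, int bins = 64)
{
    if (frame.empty())
        return 0.0;
    std::vector<double> hist(bins, 0.0);
    for (int16_t s : frame)
    {
        // Shift samples from [-32768, 32767] into [0, bins) and accumulate.
        int b = static_cast<int>((s + 32768.0) / 65536.0 * bins);
        hist[std::min(b, bins - 1)] += 1.0;
    }
    double h = 0.0;
    for (double count : hist)
    {
        if (count > 0.0)
        {
            const double p = count / static_cast<double>(frame.size());
            h -= p * std::log2(p);
        }
    }
    return h; // near 0 for silence-like frames, up to log2(bins) for rich ones
}

// Clamped linear map from entropy to an aperture in [0, 1]; the thresholds
// hMin/hMax are illustrative and would need tuning against real TTS output.
double mouthAperture(double entropy, double hMin = 1.0, double hMax = 6.0)
{
    return std::clamp((entropy - hMin) / (hMax - hMin), 0.0, 1.0);
}
```

A control loop would presumably evaluate `mouthAperture(frameEntropy(frame))` for each frame of the synthetic audio and forward the resulting value to the mouth actuator; the frame length and update rate are, again, assumptions rather than details reported in the record.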

