

Summary of The UJI online robot: a distributed architecture for pattern recognition, autonomous grasping and augmented reality

Raúl Marín

  • The thesis has been developed at the Intelligent Robotics Laboratory of the University Jaume I (Spain). The objectives are focused on the laboratory's fields of interest, which are Telerobotics, Human-Robot Interaction, Manipulation, Visual Servoing, and Service Robotics in general.

    Basically, the work has consisted of designing and implementing a complete vision-based robotic system to control an educational robot via the Web, using voice commands like "Grasp the object one" or "Grasp the cube". The original objectives were extended to include programming the robot through high-level voice commands as well as very quick and significant mouse interactions (adjustable interaction levels). Besides this, the user interface has been designed to let the operator "predict" the robot movements before sending the programmed commands to the real robot (a "predictive system"). This kind of interface has the advantage of saving network bandwidth, and it can even be used as a complete off-line task-specification programming interface. Using a predictive virtual environment and giving more intelligence to the robot enables a higher level of interaction, which avoids the cognitive fatigue associated with many teleoperated systems. A minimal sketch of this confirm-then-send pattern is given below.
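
    The summary contains no source code, so the following Python sketch is only an illustration of the predictive-display idea under stated assumptions: commands run against a local virtual model first and are transmitted to the real robot in a single batch once the operator confirms the prediction. All class and method names are hypothetical.

        class PredictiveTeleoperator:
            """Run commands on a local virtual model before the real robot."""

            def __init__(self, virtual_robot, real_robot):
                self.virtual = virtual_robot  # local 3D model, no latency
                self.real = real_robot        # remote robot, network latency
                self.pending = []             # commands validated but not sent

            def program(self, command):
                # Simulate immediately: the operator sees the predicted
                # motion without waiting for a network round trip.
                predicted_state = self.virtual.execute(command)
                self.pending.append(command)
                return predicted_state

            def confirm(self):
                # One network exchange sends the whole validated task,
                # saving bandwidth compared with continuous teleoperation.
                self.real.send_batch(self.pending)
                self.pending.clear()

            def discard(self):
                # The operator rejects the prediction; resynchronize the
                # virtual model with the real robot's last known state.
                self.virtual.reset_to(self.real.last_known_state())
                self.pending.clear()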

    The most important novel contributions included in this work are the following:

    1. Automatic Object Recognition: The system is able to recognize the objects in the robot scenario using a camera image as input (automatic object recognition). This feature allows the user to interact with the robot through high-level commands like "Grasp allen".

    2. Incremental Learning: Because the object recognition procedure requires some training before it operates efficiently, the UJI Online Robot introduces an incremental learning capability: the robot keeps learning from the user interaction, so the object recognition module performs better as time goes by (a sketch covering points 1 and 2 follows this list).

    3. Autonomous Grasping: Once an object has been recognized in a scene, the next question is how to grasp it. The autonomous grasping module computes the set of grasping points that can be used to manipulate an object while satisfying the stability requirements (see the grasping sketch after this list).

    4. Non-Immersive Virtual Reality: To mitigate Internet latency and time-delay effects, the system offers a user interface based on non-immersive virtual reality. Taking the camera data as input, a 3D virtual reality scenario is constructed, which allows tasks to be specified and then confirmed to the real robot in a single step.

    5. Augmented Reality: The 3D virtual scenario is complemented with computer-generated information that greatly improves human performance (e.g. projections of the gripper over the scene, superposition of data to avoid robot occlusions, etc.). In some situations the user has more information when controlling the robot from the web-based user interface than when viewing the robot scenario directly.

    6. Task specification: The system permits specifying complete Pick & Place actions, which can be saved to a text file. This robot programming can be accomplished in both the off-line and the on-line mode (a sketch of a possible file format follows this list).

    7. Speech recognition/synthesis: To our knowledge this is the first online robot that allows the user to give high-level commands using just a microphone. Moreover, the speech synthesizer is integrated into the predictive display, so that the robot responds to the user and asks for confirmation before sending the command to the real scenario (see the last sketch after this list).
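
    The descriptors and classifier actually used in the thesis are not detailed in this summary. The Python sketch below only assumes some rotation- and scale-invariant shape features (e.g. moment invariants computed on the segmented camera image) and a nearest-neighbour classifier that simply stores every confirmed or corrected example, which is what makes the learning incremental. All names and feature values are hypothetical.

        import math

        class IncrementalRecognizer:
            """1-NN object recognizer that learns from user corrections."""

            def __init__(self):
                self.samples = []  # list of (feature_vector, label) pairs

            def train(self, features, label):
                # Incremental learning: every confirmed example is simply
                # added to the sample set, so recognition improves over time.
                self.samples.append((list(features), label))

            def classify(self, features):
                if not self.samples:
                    return None
                # Return the label of the closest stored sample.
                nearest = min(self.samples,
                              key=lambda s: math.dist(s[0], features))
                return nearest[1]

        rec = IncrementalRecognizer()
        rec.train([0.21, 0.03, 0.001], "cube")
        rec.train([0.35, 0.10, 0.004], "allen")
        print(rec.classify([0.22, 0.04, 0.001]))  # -> "cube"
        # When the user corrects a wrong answer, the corrected example
        # is added too, and later classifications benefit from it:
        rec.train([0.33, 0.09, 0.004], "allen")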
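
    The stability criterion used by the grasping module is not reproduced in this summary. A common heuristic for a parallel-jaw gripper, sketched below under that assumption, is to look for antipodal pairs of contour points whose outward normals are nearly opposite and whose separation fits the gripper aperture; the thresholds and the example object are illustrative only.

        import math

        def edge_normals(polygon):
            """Yield (midpoint, outward normal) per edge of a CCW polygon."""
            n = len(polygon)
            for i in range(n):
                (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
                mid = ((x1 + x2) / 2, (y1 + y2) / 2)
                length = math.hypot(x2 - x1, y2 - y1) or 1.0
                # For a counter-clockwise polygon the outward normal of
                # an edge with direction (dx, dy) is (dy, -dx), normalized.
                yield mid, ((y2 - y1) / length, -(x2 - x1) / length)

        def antipodal_grasps(polygon, max_aperture, align=0.95):
            """Return candidate grasp pairs for a parallel-jaw gripper."""
            edges = list(edge_normals(polygon))
            grasps = []
            for i, (p1, n1) in enumerate(edges):
                for p2, n2 in edges[i + 1:]:
                    # Normals must be nearly opposite (stable contact) and
                    # the points must fit between the gripper jaws.
                    opposing = n1[0] * n2[0] + n1[1] * n2[1] <= -align
                    if opposing and math.dist(p1, p2) <= max_aperture:
                        grasps.append((p1, p2))
            return grasps

        # Hypothetical usage on a 40 x 20 rectangle (a segmented object):
        rect = [(0, 0), (40, 0), (40, 20), (0, 20)]
        print(antipodal_grasps(rect, max_aperture=30))  # grasp the short side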
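
    The actual Pick & Place file format is not given in this summary; a minimal hypothetical serialization, one high-level command per line, could look like the sketch below. The same command list can drive the virtual robot in off-line mode or be sent to the real robot in on-line mode.

        def save_task(path, commands):
            """Write a Pick & Place task as one command per line."""
            with open(path, "w", encoding="utf-8") as f:
                f.write("\n".join(commands) + "\n")

        def load_task(path):
            """Read a saved task back, skipping blanks and '#' comments."""
            with open(path, encoding="utf-8") as f:
                return [line.strip() for line in f
                        if line.strip() and not line.startswith("#")]

        # Hypothetical task file contents (command names are invented):
        save_task("pick_and_place.txt", [
            "GRASP cube",
            "MOVE 120 80 40",
            "UNGRASP",
        ])
        print(load_task("pick_and_place.txt"))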
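
    Neither the recognizer nor the command grammar is specified in this summary. The sketch below assumes a tiny pattern-based grammar that maps a recognized phrase such as "Grasp the cube" to a high-level command, with the synthesizer asking for confirmation before the command leaves the predictive display; every pattern and callback shown is hypothetical.

        import re

        # Hypothetical grammar; the real system's vocabulary is richer.
        COMMAND_PATTERNS = [
            (re.compile(r"grasp (?:the )?(?:object )?(\w+)"), "GRASP {0}"),
            (re.compile(r"place (?:it )?on (?:the )?(\w+)"), "PLACE {0}"),
        ]

        def parse_utterance(text):
            """Map a recognized phrase to a high-level robot command."""
            text = text.lower().strip()
            for pattern, template in COMMAND_PATTERNS:
                m = pattern.fullmatch(text)
                if m:
                    return template.format(*m.groups())
            return None

        def confirm_and_send(utterance, speak, confirm, send):
            # The synthesizer asks for confirmation before the command
            # is sent from the predictive display to the real scenario.
            cmd = parse_utterance(utterance)
            if cmd is None:
                speak("Sorry, I did not understand.")
                return
            speak(f"Shall I execute {cmd}?")
            if confirm():  # e.g. the operator answers "yes"
                send(cmd)

        confirm_and_send("Grasp the cube", speak=print,
                         confirm=lambda: True, send=print)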

    As explained in Chapter I, the novel contributions have been partially published in several scientific forums (journals, books, etc.). The most remarkable examples are the acceptance of two papers at the IEEE International Conference on Robotics and Automation 2002, and the publication of an extended article in the Special Issue on web telerobotics of the International Journal on Robotics and Automation (November 2002).

    We have demonstrated the worth of the system by means of an application in the Education and Training domain. Almost one hundred undergraduate students have used the web-based interface to program Pick & Place operations, and the results are really encouraging (refer to Chapter VII for more details). Although we refer to the project as The UJI Online Robot, in the Education and Training domain the term The UJI Telerobotic Training System has been used instead.

    Further work is planned to focus on applying Remote Visual Servoing techniques in order to improve the current system performance. This would avoid having to spend long nights calibrating the robot and the cameras, and would extend the system's capabilities to less structured environments.

