Gabriel Recatalá
This thesis focuses on the definition of a task for the determination, tracking, and execution of a grasp on an unknown object. In particular, it considers the case in which the object is ideally planar and the grasp has to be executed with a two-fingered, parallel-jaw gripper using vision as the source of sensor data. For the specification of this task, an architecture is defined based on three basic components (virtual sensors, filters, and actuators), which can be connected to define a control loop. Each step in this task is analyzed separately, considering several options in some cases.
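To make the component-based architecture concrete, the following is a minimal sketch, not the thesis implementation, of how the three component types named above might be connected into a sense-filter-act loop. All class and function names here are illustrative assumptions.

```python
from typing import Callable, List

class VirtualSensor:
    """Wraps a data source (e.g. a camera) and returns raw measurements."""
    def __init__(self, read_fn: Callable[[], object]):
        self._read_fn = read_fn

    def read(self) -> object:
        return self._read_fn()

class Filter:
    """Transforms incoming data into control features (e.g. extracts grasp points)."""
    def __init__(self, transform_fn: Callable[[object], object]):
        self._transform_fn = transform_fn

    def apply(self, data: object) -> object:
        return self._transform_fn(data)

class Actuator:
    """Consumes control features and issues commands to the robot."""
    def __init__(self, command_fn: Callable[[object], None]):
        self._command_fn = command_fn

    def act(self, features: object) -> None:
        self._command_fn(features)

def run_control_loop(sensor: VirtualSensor, filters: List[Filter],
                     actuator: Actuator, iterations: int) -> None:
    """Connects the components into a simple control loop: sense, filter, act."""
    for _ in range(iterations):
        data = sensor.read()
        for f in filters:
            data = f.apply(data)
        actuator.act(data)
```

Because each stage only exposes a read/apply/act interface, individual components can be swapped (for example, replacing the filter chain that computes grasp points) without changing the structure of the loop, which is the modularity the architecture is meant to provide.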
The main contributions of this thesis include: (1) the use of a modular approach to the specification of a control task that provides a basic framework for supporting the concept of behavior; (2) the analysis of several strategies for obtaining a compact representation of the contour of an object; (3) the development of a method for the evaluation and search of a grasp on a planar object for a two-fingered gripper; (4) the specification of different representations of a grasp and the analysis of their use for tracking the grasp between different views of an object; (5) the specification of algorithms for tracking a grasp across the views of an object obtained from a sequence of single images and from a sequence of stereo images; (6) the definition of parametrized models of the target position of the grasp points and of the feasibility of this target grasp, together with an off-line procedure for the computation of some of the reference values required by these models; and (7) the definition and analysis of a visual servoing control scheme that guides the gripper of a robot arm towards an unknown object using the grasp points computed for that object as control features.
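As a point of reference for contribution (3), the sketch below is not the thesis method; it only illustrates the standard antipodal, friction-cone criterion that grasp evaluation for a two-fingered gripper on a planar contour typically builds on. The sampled contour, its unit outward normals, the friction coefficient mu, and the maximum gripper opening are all assumed inputs.

```python
import numpy as np

def antipodal_score(p1, n1, p2, n2, mu=0.3):
    """Checks whether two contacts on a planar contour form a friction-closure
    grasp: the line joining them must lie inside the friction cone at each
    contact. n1, n2 are unit outward normals. Returns (feasible, margin)."""
    axis = p2 - p1
    axis = axis / np.linalg.norm(axis)
    cone_half_angle = np.arctan(mu)
    # Angle between the grasp axis and the inward normal at each contact.
    a1 = np.arccos(np.clip(np.dot(axis, -n1), -1.0, 1.0))
    a2 = np.arccos(np.clip(np.dot(-axis, -n2), -1.0, 1.0))
    margin = cone_half_angle - max(a1, a2)
    return margin >= 0.0, margin

def best_grasp(contour, normals, mu=0.3, max_opening=0.08):
    """Exhaustively scores contact-point pairs on an (N, 2) contour and keeps
    the feasible pair with the largest angular margin."""
    best = None
    for i in range(len(contour)):
        for j in range(i + 1, len(contour)):
            if np.linalg.norm(contour[j] - contour[i]) > max_opening:
                continue  # pair exceeds the assumed gripper opening
            ok, margin = antipodal_score(contour[i], normals[i],
                                         contour[j], normals[j], mu)
            if ok and (best is None or margin > best[0]):
                best = (margin, i, j)
    return best  # (margin, index_1, index_2) or None if no feasible grasp
```

Once a pair of grasp points is selected and tracked across views, contribution (7) uses such points as the control features of a visual servoing loop that drives the gripper towards the object.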