Computer vision is one of the most challenging applications in sensor systems, since the signal is complex from both a spatial and a logical point of view. Because of these characteristics, vision applications demand substantial computing resources, which makes them especially difficult to deploy in embedded systems such as mobile robots, where memory and computing power are limited. In this work, a distributed architecture for humanoid visual control is presented, in which dedicated vision-processing nodes cooperate with the main CPU, which coordinates the movements of the exploring behaviours. This architecture provides additional computing resources in a reduced area, preventing the vision-processing algorithms from disturbing the tasks related to low-level control (mainly kinematics). The nodes exchange information, allowing control loops to be linked between them.