Robotic scene understanding is the ability of robots to interpret and analyze their surroundings, enabling them to navigate and interact with real-world environments effectively. This includes recognizing and responding to dynamic object interactions, which are crucial for tasks such as human-robot collaboration. This presentation explores recent advances in robotic scene understanding and dynamic object interaction, with a particular focus on the Spot robot. We delve into approaches that enable Spot to perform complex tasks, such as opening drawers and grasping objects, through deep learning frameworks. We also examine the development of dynamic scene graphs that store and enrich information, creating detailed, interconnected representations of environments. The talk further covers how interactions with light switches can be integrated into these scene graphs, yielding lightweight data structures that adapt to real-time changes. Finally, we introduce a robotic benchmark dataset designed to advance visual localization techniques, providing a standardized platform for evaluation and improvement. Together, these advances aim to push the boundaries of robotics, fostering more sophisticated and dynamic interactions with the environment.
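To make the scene-graph idea concrete, the following is a minimal illustrative sketch, not the talk's actual implementation: objects become nodes, spatial and interaction relations become edges, and node state is updated as the robot observes changes. All names (`SceneNode`, `SceneGraph`, `drawer_1`, `switch_1`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    # Hypothetical node: a named object with mutable attributes (pose, state, ...)
    name: str
    attributes: dict = field(default_factory=dict)

@dataclass
class SceneGraph:
    # Hypothetical lightweight graph: nodes keyed by name, edges as triples
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (source, relation, target)

    def add_node(self, node: SceneNode) -> None:
        self.nodes[node.name] = node

    def relate(self, source: str, relation: str, target: str) -> None:
        self.edges.append((source, relation, target))

    def update_state(self, name: str, **changes) -> None:
        # Adapt to a real-time change, e.g. the robot toggling a switch
        self.nodes[name].attributes.update(changes)

# Usage: a room containing a drawer and a light switch
g = SceneGraph()
g.add_node(SceneNode("kitchen"))
g.add_node(SceneNode("drawer_1", {"state": "closed"}))
g.add_node(SceneNode("switch_1", {"state": "off"}))
g.relate("drawer_1", "inside", "kitchen")
g.relate("switch_1", "mounted_in", "kitchen")
g.update_state("switch_1", state="on")
```

In this toy form, interactions such as flipping a light switch reduce to a state update on an existing node, which is what keeps the structure lightweight under real-time changes.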