People, cracks, stairs, and doors: vision-based semantic mapping with a quadruped robot supporting first responders in Search & Rescue
Betta, Zoe; Recchiuto, Carmine Tommaso; Sgorbissa, Antonio
2024-01-01
Abstract
This study introduces a system implemented on a legged robot, designed to generate a multi-layered map that incorporates semantic information, specifically tailored for Search & Rescue robotics. The article discusses the development of a Machine Learning model based on visual data for recognizing people and environmental features, and its integration into a mapping and navigation architecture. The system was tested in two different locations using the Spot robot by Boston Dynamics, equipped with an external ZED2 depth camera. The tests are described in detail and the results are analyzed.
No files are associated with this record.
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.