Human detection in marine disaster search and rescue scenario: a multi-modal early fusion approach
Filippo Ponzini;Michele Martelli
2025-01-01
Abstract
Marine disasters pose significant risks to both victims and rescuers, often occurring in challenging conditions. To address this, we propose a robust perception system for human-in-water search and rescue, leveraging early fusion of thermal imaging and LiDAR data. The system employs the YOLOv8 deep neural network to detect and classify survivors from multi-source images, maintaining high reliability even in adverse environments. The framework has been designed, implemented, and evaluated on a newly collected real-world dataset, specifically created for this application. Additional data augmentation simulates harsh operational and environmental conditions to enhance robustness. Performance was assessed in terms of detection accuracy and computational efficiency. The proposed multi-sensor approach achieved a precision of 93.5% and a recall of 94.2%, outperforming single-sensor models and demonstrating superior generalization in complex scenarios. Additionally, it reduces computational cost by approximately 64% compared to a late fusion strategy, supporting efficient real-time processing. These results confirm that the proposed system significantly improves perception capabilities while meeting real-time constraints, making it suitable for deployment in time-critical maritime rescue operations. By integrating autonomous sensing and intelligent processing, this work contributes to safer and more effective search and rescue missions at sea.
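The core idea of early fusion described above is to combine the modalities at the input level, so a single detector pass processes one multi-channel image instead of running separate networks per sensor (the source of the reported computational savings over late fusion). A minimal sketch of that input-level combination is shown below; the function name, the two-channel layout, and the min-max normalization are illustrative assumptions, not the paper's exact pipeline, and the LiDAR data is assumed to be already projected and registered onto the thermal image plane.

```python
import numpy as np

def early_fuse(thermal: np.ndarray, lidar_depth: np.ndarray) -> np.ndarray:
    """Illustrative early fusion: stack a thermal frame and a LiDAR depth map
    (already projected to the same image plane) into one multi-channel array,
    suitable as a single input tensor for a detector such as YOLOv8.
    """
    if thermal.shape != lidar_depth.shape:
        raise ValueError("inputs must be registered to the same HxW grid")
    # Normalize each modality to [0, 1] so neither dominates the input range.
    t = (thermal - thermal.min()) / (np.ptp(thermal) + 1e-8)
    d = (lidar_depth - lidar_depth.min()) / (np.ptp(lidar_depth) + 1e-8)
    # Early fusion: concatenate along the channel axis -> shape (H, W, 2).
    return np.stack([t, d], axis=-1)

# Example with synthetic, pre-registered 480x640 frames.
fused = early_fuse(np.random.rand(480, 640), np.random.rand(480, 640))
```

Because the fused array is built before inference, the detector's backbone runs once per frame; a late fusion scheme would instead run one backbone per sensor and merge the resulting detections afterwards.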



