Multi-Modal Sensor Fusion for ASV Situational Awareness
PI Eric Coyle
An investigation into strategies and techniques for maritime object detection and classification using visual and spatial data with an emphasis on sensor fusion.
This project focuses on enhancing autonomous surface vessel (ASV) situational awareness through the fusion of visual and spatial sensing, aiming to improve the detection and classification of objects in the surrounding environment. Such technologies have applications in patrolling test ranges, enhancing harbor security, and using ASVs as support vessels for manned operations. The research is structured around four main objectives: creating and annotating multi-modal maritime data for sensor fusion, developing accurate surface maps for navigation, applying machine learning techniques for robust object identification, and creating sensor fusion strategies for improved robustness. The team built a custom data acquisition system and used it to create the open-source ER-Coast dataset. The dataset combines data from Light Detection and Ranging (LiDAR) sensors, high-resolution cameras, infrared cameras, and localization sensors, capturing coastal waterways in Florida, both day and night, across 36 sequences. A portion of this data has been made publicly available for future LiDAR semantic segmentation, image segmentation, and object detection studies.
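A common first step in this kind of LiDAR-camera fusion is projecting LiDAR returns into the image plane so that 3D points can be paired with pixels from the visual or infrared cameras. The sketch below illustrates that projection with NumPy; the intrinsic matrix, extrinsic transform, and image size are placeholder values for illustration, not the project's actual calibration.

```python
import numpy as np

# Hypothetical calibration for illustration only; real values come from
# LiDAR-camera calibration of the data acquisition platform.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])   # camera intrinsics (fx, fy, cx, cy)
T_cam_lidar = np.eye(4)                   # LiDAR -> camera extrinsic transform
T_cam_lidar[:3, 3] = [0.0, -0.2, 0.1]     # example mounting offset (meters)

def project_lidar_to_image(points_xyz, K, T, img_w=1280, img_h=720):
    """Project Nx3 LiDAR points to pixel coordinates, discarding points
    behind the camera or outside the image bounds."""
    n = points_xyz.shape[0]
    pts_h = np.hstack([points_xyz, np.ones((n, 1))])  # homogeneous coords
    cam = (T @ pts_h.T)[:3, :]                        # points in camera frame
    in_front = cam[2, :] > 0.1                        # keep points ahead of lens
    uvw = K @ cam[:, in_front]
    uv = uvw[:2, :] / uvw[2, :]                       # perspective divide
    in_img = ((uv[0] >= 0) & (uv[0] < img_w) &
              (uv[1] >= 0) & (uv[1] < img_h))
    return uv[:, in_img].T                            # Mx2 pixel coordinates

# A point 10 m ahead of the sensor lands near the image center;
# a point behind the camera is filtered out.
pts = np.array([[0.0, 0.0, 10.0],
                [0.0, 0.0, -5.0]])
pixels = project_lidar_to_image(pts, K, T_cam_lidar)
```

Once points are associated with pixels, image-space labels (e.g. from an object detector) can be transferred to the corresponding LiDAR returns, which is one way the visual and spatial modalities reinforce each other.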
Research Dates
06/01/2022 to 12/31/2025