TOMSNAV: Topological Multi-modal and Semantic Navigation for Aerial Vehicles
JUNIOR STAR project (2025-2029) supported by the Czech Science Foundation (GAČR) under research project No. 25-17779M
Abstract:
This fundamental research project explores how future mobile aerial robots could use their senses to detect and model their surrounding environment. We hypothesize that a significant methodological leap is needed beyond the incremental improvements of current SLAM (Simultaneous Localization and Mapping) systems: the accuracy-focused SLAM systems in use today are so computationally complex that their real-time performance is limited. We propose a biologically inspired approach to multi-modal semantic feature extraction, which is gradually becoming feasible thanks to advances in visual object detection and classification. Our objective is to generalize robot navigation in real-world environments without relying on precise motion odometry. To achieve this, we intend to develop novel methods for learning and extracting multi-modal landmarks from sensor data. Additionally, we seek to address the fundamental challenge of sharing plans and paths between robots and humans in environments for which no prior knowledge is available and no notion of global coordinates exists.
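
For illustration only, the sketch below shows one possible way such a coordinate-free topological map could be represented in Python: landmarks are semantic, multi-modal observations, edges encode traversability between them, and a plan is simply a sequence of landmarks that both a robot and a human can follow. The Landmark and TopologicalMap classes and the breadth-first planner are assumptions made for this example, not components of the project.

    # Illustrative sketch only: a minimal topological map of semantic landmarks.
    # Hypothetical representation, not the project's implementation.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Landmark:
        """A multi-modal semantic landmark (e.g., 'garden gate', 'red roof')."""
        name: str       # human-readable semantic label
        modality: str   # sensing modality it was extracted from (camera, lidar, ...)

    @dataclass
    class TopologicalMap:
        """Graph of landmarks; edges encode traversability, not metric poses."""
        edges: dict[Landmark, set[Landmark]] = field(default_factory=dict)

        def connect(self, a: Landmark, b: Landmark) -> None:
            # Undirected edge: the robot can travel between the two landmarks.
            self.edges.setdefault(a, set()).add(b)
            self.edges.setdefault(b, set()).add(a)

        def plan(self, start: Landmark, goal: Landmark) -> list[Landmark] | None:
            """Breadth-first search over landmarks: the result is a sequence of
            semantic waypoints, shareable without any global coordinate frame."""
            frontier, came_from = [start], {start: None}
            while frontier:
                current = frontier.pop(0)
                if current == goal:
                    path = []
                    while current is not None:
                        path.append(current)
                        current = came_from[current]
                    return path[::-1]
                for nxt in self.edges.get(current, set()):
                    if nxt not in came_from:
                        came_from[nxt] = current
                        frontier.append(nxt)
            return None  # goal not reachable in the landmark graph

    # Usage: a plan like [garden gate -> large oak tree -> red roof] can be
    # communicated to a human without GPS or metric coordinates.
    gate = Landmark("garden gate", "camera")
    oak = Landmark("large oak tree", "camera")
    roof = Landmark("red roof", "camera")
    tmap = TopologicalMap()
    tmap.connect(gate, oak)
    tmap.connect(oak, roof)
    print(tmap.plan(gate, roof))
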
In Media: