
Uncertainty-Aware Visual-Inertial SLAM With Volumetric Occupancy Mapping

MCML Authors

Stefan Leutenegger

Prof. Dr.

Principal Investigator


Abstract

We propose a visual-inertial simultaneous localization and mapping (SLAM) system that tightly couples sparse reprojection errors, inertial measurement unit pre-integrals, and relative pose factors with dense volumetric occupancy mapping. Depth predictions from a deep neural network are thereby fused in a fully probabilistic manner. Specifically, our method is rigorously uncertainty-aware: first, we use depth and uncertainty predictions from a deep network not only on the robot's stereo rig, but additionally fuse motion stereo, which provides depth information across a range of baselines and therefore drastically increases mapping accuracy. Second, the predicted and fused depth uncertainty propagates not only into the occupancy probabilities but also into the alignment factors between the generated dense submaps that enter the probabilistic nonlinear least squares estimator. This submap representation offers globally consistent geometry at scale. Our method is thoroughly evaluated on two benchmark datasets, achieving localization and mapping accuracy that exceeds the state of the art while providing volumetric occupancy directly usable for downstream robotic planning and control in real time.
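Two of the abstract's probabilistic ingredients, fusing Gaussian depth estimates from several baselines and propagating the fused uncertainty into occupancy updates, can be illustrated with a minimal sketch. The function names, the Gaussian depth model, the log-odds constants, and the 3-sigma cutoff below are illustrative assumptions for this page, not the paper's actual implementation.

```python
import numpy as np
from scipy.stats import norm

def fuse_depths(depths, sigmas):
    """Inverse-variance fusion of independent Gaussian depth estimates,
    e.g. learned stereo plus motion stereo at several baselines."""
    w = 1.0 / np.square(sigmas)               # per-estimate precision
    var = 1.0 / np.sum(w, axis=0)             # fused variance
    mean = var * np.sum(w * depths, axis=0)   # precision-weighted mean
    return mean, np.sqrt(var)

def log_odds_update(d_cell, d_fused, sigma_fused,
                    l_free=-0.4, l_occ=0.85):
    """Simplified uncertainty-weighted inverse sensor model along a ray.

    A voxel at depth d_cell collects free-space evidence in proportion
    to the probability that the surface lies beyond it under the
    Gaussian depth model; a large sigma_fused smears the occupied
    evidence over more voxels and weakens the free-space carving.
    """
    p_behind = norm.cdf(d_cell, loc=d_fused, scale=sigma_fused)
    l = l_free * (1.0 - p_behind) + l_occ * p_behind
    # do not integrate evidence far behind the estimated surface
    return np.where(d_cell > d_fused + 3.0 * sigma_fused, 0.0, l)

# One pixel, two depth hypotheses (learned stereo, motion stereo):
d, s = fuse_depths(np.array([2.10, 2.04]), np.array([0.15, 0.05]))
voxel_depths = np.arange(0.1, 3.0, 0.1)   # cell centres along the ray
updates = log_odds_update(voxel_depths, d, s)
```

Note how the fused standard deviation shrinks below either input; the same fused variance could also weight submap-to-submap alignment residuals in a nonlinear least squares backend, which is the role the paper assigns to its alignment factors.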

ICRA 2025

IEEE International Conference on Robotics and Automation. Atlanta, GA, USA, May 19-23, 2025.
A* Conference

Authors

J. Jung • S. Boche • S. B. Laina • S. Leutenegger

Links

DOI

Research Area

 B3 | Multimodal Perception

BibTeX Key: JBL+25
