26.10.2025

CoProU-VO Wins GCPR 2025 Best Paper Award

Award-Winning Work by MCML Director Daniel Cremers and His Team for Advances in Unsupervised Visual Odometry

The paper "CoProU-VO: Combining Projected Uncertainty for End-to-End Unsupervised Monocular Visual Odometry" by MCML Director Daniel Cremers and Junior Members Weirong Chen and Johannes Meier, together with Jingchao Xie, Oussema Dhaouadi, and Jacques Kaiser, received the Best Paper Award at GCPR 2025.

Their work presents CoProU-VO, an end-to-end unsupervised visual odometry framework that propagates and combines uncertainty across temporal frames to improve robustness in dynamic scenes. Built on vision transformer backbones, it jointly learns depth, uncertainty, and camera poses, achieving state-of-the-art performance on the KITTI and nuScenes datasets.
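
To give a rough intuition for how combined projected uncertainty can guide an unsupervised photometric objective, here is a minimal illustrative sketch, not the authors' implementation: all tensor names, shapes, and the variance-addition rule for merging the target and projected reference uncertainties are assumptions.

# Minimal sketch (assumed, not the paper's code): weight a photometric
# reconstruction loss by a per-pixel uncertainty combined from the target
# frame and the uncertainty projected from the reference frame.
import torch

def uncertainty_weighted_photometric_loss(
    target_img: torch.Tensor,         # (B, 3, H, W) target frame
    warped_ref_img: torch.Tensor,     # (B, 3, H, W) reference frame warped into the target view
    target_unc: torch.Tensor,         # (B, 1, H, W) predicted uncertainty for the target frame
    projected_ref_unc: torch.Tensor,  # (B, 1, H, W) reference uncertainty projected into the target view
) -> torch.Tensor:
    # Combine the two maps by adding variances (an assumption here), so a pixel
    # is down-weighted whenever either frame is unreliable, e.g. on moving objects.
    combined_var = target_unc ** 2 + projected_ref_unc ** 2 + 1e-6

    # Per-pixel photometric residual between the target image and the view
    # synthesized from the reference frame.
    residual = (target_img - warped_ref_img).abs().mean(dim=1, keepdim=True)

    # Heteroscedastic-style weighting: large uncertainty suppresses the residual,
    # while the log term keeps the network from predicting unbounded uncertainty.
    return (residual / combined_var + torch.log(combined_var)).mean()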

Congratulations to the team on this outstanding achievement!

Check out the full paper:

J. Xie • O. Dhaouadi • W. Chen • J. Meier • J. Kaiser • D. Cremers
CoProU-VO: Combining Projected Uncertainty for End-to-End Unsupervised Monocular Visual Odometry.
GCPR 2025 - German Conference on Pattern Recognition. Freiburg, Germany, Oct 23-26, 2025. Best Paper Award. To be published. Preprint available on arXiv.
#award #research #cremers

Related

04.12.2025

World’s First Complete 3D Model of All Buildings Released

Xiaoxiang Zhu’s team releases GlobalBuildingAtlas, a high-res 3D map of 2.75B buildings for advanced urban and climate analysis.

04.12.2025

When to Say "I’m Not Sure": Making Language Models More Self-Aware

ICLR 2025 research by the groups of David Rügamer and Bernd Bischl introduces methods to make LLMs more reliable by expressing uncertainty.

28.11.2025

MCML at NeurIPS 2025

MCML researchers are represented with 46 papers at NeurIPS 2025 (37 in the main track and 9 in workshops).

27.11.2025

Seeing the Bigger Picture – One Detail at a Time

FLAIR, introduced by Zeynep Akata’s group at CVPR 2025, brings fine-grained, text-guided detail recognition to vision-language models.

25.11.2025

Daniel Rückert Among the World’s Most Cited Researchers

MCML Director Daniel Rückert is among the world’s most cited researchers for AI in healthcare, part of 17 TUM scientists recognized in 2025.
