
L2RSI: Cross-View LiDAR-Based Place Recognition for Large-Scale Urban Scenes via Remote Sensing Imagery


Abstract

We tackle the challenge of LiDAR-based place recognition, which traditionally depends on costly and time-consuming prior 3D maps. To overcome this, we first construct the XA-L&RSI dataset, which encompasses approximately 110,000 remote sensing submaps and 13,000 LiDAR point cloud submaps captured in urban scenes, and propose a novel method, L2RSI, for cross-view LiDAR place recognition using high-resolution Remote Sensing Imagery. This approach enables large-scale localization at a reduced cost by leveraging readily available overhead images as map proxies. L2RSI addresses the dual challenges of cross-view and cross-modal place recognition by learning feature alignment between point cloud submaps and remote sensing submaps in the semantic domain. Additionally, we introduce a novel probability propagation method based on a dynamic Gaussian mixture model to refine position predictions, effectively leveraging temporal and spatial information. This enables large-scale retrieval and cross-scene generalization without fine-tuning. Extensive experiments on XA-L&RSI demonstrate that, within a 100 km² retrieval range, L2RSI accurately localizes 95.08% of point cloud submaps within a 30 m radius for the top-1 retrieved location. A video that more vividly displays the place recognition results of L2RSI is available at this https URL.
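For intuition, the sketch below illustrates one way the two ideas in the abstract could fit together: cross-modal retrieval of remote sensing submap candidates by embedding similarity, followed by re-weighting those candidates against a position belief propagated over time as a Gaussian mixture. This is a minimal, hypothetical sketch, not the authors' implementation; the function names, cosine similarity measure, and the motion-noise value are assumptions.

```python
# Hypothetical sketch of candidate retrieval plus Gaussian-mixture belief
# propagation; all names and parameters are assumptions, not the paper's code.
import numpy as np

def retrieve_topk(query_emb, map_embs, k=5):
    """Rank remote sensing submap embeddings by cosine similarity to the
    LiDAR submap query embedding and return the top-k indices and scores."""
    sims = map_embs @ query_emb / (
        np.linalg.norm(map_embs, axis=1) * np.linalg.norm(query_emb) + 1e-8)
    idx = np.argsort(-sims)[:k]
    return idx, sims[idx]

def propagate_belief(means, weights, motion, motion_sigma=5.0):
    """Shift mixture component centers by the estimated ego-motion and
    use a fixed spread (assumed value) to model motion uncertainty."""
    return means + motion, weights, motion_sigma

def update_belief(means, weights, sigma, cand_pos, cand_sims):
    """Re-weight retrieved candidate positions by retrieval similarity
    times their likelihood under the propagated Gaussian mixture."""
    new_weights = []
    for p, s in zip(cand_pos, cand_sims):
        d2 = np.sum((means - p) ** 2, axis=1)
        prior = np.sum(weights * np.exp(-0.5 * d2 / sigma ** 2))
        new_weights.append(s * prior)
    new_weights = np.asarray(new_weights)
    new_weights /= new_weights.sum() + 1e-12
    return cand_pos, new_weights
```

In this sketch, retrieval similarity and a spatially propagated prior jointly weight the candidate positions, which mirrors the abstract's idea of leveraging temporal and spatial information to refine the position prediction.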



Preprint

Mar. 2025

Authors

Z. Shi • X. Zhang • Y. Xia • Y. Zang • S. Shen • C. Wang

Links

GitHub

Research Area

B1 | Computer Vision

BibTeXKey: SZX+25a
