
4D Panoptic LiDAR Segmentation

MCML Authors

Prof. Dr. Laura Leal-Taixé

Principal Investigator


Abstract

Temporal semantic scene understanding is critical for self-driving cars or robots operating in dynamic environments. In this paper, we propose 4D panoptic LiDAR segmentation to assign a semantic class and a temporally-consistent instance ID to a sequence of 3D points. To this end, we present an approach and a point-centric evaluation metric. Our approach determines a semantic class for every point while modeling object instances as probability distributions in the 4D spatio-temporal domain. We process multiple point clouds in parallel and resolve point-to-instance associations, effectively alleviating the need for explicit temporal data association. Inspired by recent advances in benchmarking of multi-object tracking, we propose to adopt a new evaluation metric that separates the semantic and point-to-instance association aspects of the task. With this work, we aim at paving the road for future developments of temporal LiDAR panoptic perception.
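To make the point-centric evaluation idea above more concrete, the sketch below shows one way a metric could score the semantic and point-to-instance association aspects separately and combine them with a geometric mean. The function names, the assumption that instance ID 0 marks "stuff"/no instance, and the exact weighting are illustrative choices, not the paper's definition.

```python
import numpy as np

def semantic_score(gt_sem, pred_sem, num_classes):
    """Mean point-wise IoU over semantic classes for the whole sequence."""
    ious = []
    for c in range(num_classes):
        gt_c, pred_c = gt_sem == c, pred_sem == c
        union = np.logical_or(gt_c, pred_c).sum()
        if union == 0:
            continue  # class absent in both prediction and ground truth
        ious.append(np.logical_and(gt_c, pred_c).sum() / union)
    return float(np.mean(ious)) if ious else 0.0

def association_score(gt_inst, pred_inst):
    """Association quality over 4D instance tubes.

    Instance IDs are assumed to be temporally consistent across the sequence,
    and ID 0 is assumed to mean "no instance". Each predicted tube overlapping
    a ground-truth tube contributes proportionally to the points it captures,
    weighted by its point-level IoU with that tube.
    """
    gt_ids = [i for i in np.unique(gt_inst) if i > 0]
    score = 0.0
    for g in gt_ids:
        gt_mask = gt_inst == g
        tube_score = 0.0
        for p in np.unique(pred_inst[gt_mask]):
            if p <= 0:
                continue
            pred_mask = pred_inst == p
            tpa = np.logical_and(gt_mask, pred_mask).sum()  # correctly associated points
            iou = tpa / np.logical_or(gt_mask, pred_mask).sum()
            tube_score += tpa * iou
        score += tube_score / gt_mask.sum()
    return score / max(len(gt_ids), 1)

def combined_score(gt_sem, pred_sem, gt_inst, pred_inst, num_classes):
    """Geometric-mean combination of the two terms (an assumed combination rule)."""
    return float(np.sqrt(semantic_score(gt_sem, pred_sem, num_classes)
                         * association_score(gt_inst, pred_inst)))
```

All arrays are per-point labels concatenated over the full sequence, so the temporal aspect enters only through the requirement that instance IDs stay consistent across frames.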



CVPR 2021

IEEE/CVF Conference on Computer Vision and Pattern Recognition. Virtual, Jun 19-25, 2021.
A* Conference

Authors

M. Aygün • A. Ošep • M. Weber • M. Maximov • C. Stachniss • J. Behley • L. Leal-Taixé

Links

DOI GitHub

Research Area

 B1 | Computer Vision

BibTeX Key: AOW+21
