
TRASE: Tracking-Free 4D Segmentation and Editing

MCML Authors

Abstract

Understanding dynamic 3D scenes is crucial for extended reality (XR) and autonomous driving. Incorporating semantic information into 3D reconstruction enables holistic scene representations, unlocking immersive and interactive applications. To this end, we introduce TRASE, a novel tracking-free 4D segmentation method for dynamic scene understanding. TRASE learns a 4D segmentation feature field in a weakly-supervised manner, leveraging a soft-mined contrastive learning objective guided by SAM masks. The resulting feature space is semantically coherent and well-separated, and final object-level segmentation is obtained via unsupervised clustering. This enables fast editing, such as object removal, composition, and style transfer, by directly manipulating the scene's Gaussians. We evaluate TRASE on five dynamic benchmarks, demonstrating state-of-the-art segmentation performance from unseen viewpoints and its effectiveness across various interactive editing tasks.
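The mask-guided contrastive objective described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's actual loss: the function name is hypothetical, and it uses a plain InfoNCE form with uniform weighting rather than TRASE's soft-mined variant. Points sharing a SAM mask id are treated as positives, all other points as negatives.

```python
import numpy as np

def mask_contrastive_loss(features, mask_ids, temperature=0.1):
    """Hypothetical sketch of a SAM-mask-guided contrastive loss.

    features: (N, D) per-point segmentation features.
    mask_ids: (N,) integer SAM mask assignment per point.
    Points with the same mask id are pulled together, others pushed apart
    via an InfoNCE-style objective over cosine similarities.
    """
    # L2-normalize so dot products become cosine similarities
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature
    n = len(mask_ids)
    eye = np.eye(n, dtype=bool)
    # positives: same mask id, excluding the point itself
    pos = (mask_ids[:, None] == mask_ids[None, :]) & ~eye
    # exclude self-similarity from the softmax denominator
    logits = np.where(eye, -np.inf, sim)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # mean log-probability of positives per anchor, averaged over anchors
    per_anchor = np.where(pos, log_prob, 0.0).sum(1) / np.maximum(pos.sum(1), 1)
    return -per_anchor.mean()
```

Under this sketch, a feature field whose points cluster by mask id yields a lower loss than one where features ignore the masks, which is the property the unsupervised clustering step relies on.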



3DV 2026

13th International Conference on 3D Vision. Vancouver, Canada, Mar 20-23, 2026. To be published. Preprint available.

Authors

Y.-J. Li • M. Gladkova • Y. Xia • D. Cremers

Links

URL

Research Area

B1 | Computer Vision

BibTeX Key: LGX+26
