Featurising Pixels From Dynamic 3D Scenes With Linear In-Context Learners

MCML Authors

Abstract

One of the most exciting applications of vision models involves pixel-level reasoning. Despite the abundance of vision foundation models, we still lack representations that effectively embed spatio-temporal properties of visual scenes at the pixel level. Existing frameworks either train on image-based pretext tasks, which do not account for dynamic elements, or on video sequences for action-level reasoning, which does not scale to dense pixel-level prediction. We present LILA, a framework that learns pixel-accurate feature descriptors from videos. The core element of our training framework is linear in-context learning. LILA leverages spatio-temporal cue maps -- depth and motion -- estimated with off-the-shelf networks. Despite the noisy nature of these cues, LILA trains effectively on uncurated video datasets, embedding semantic and geometric properties in a temporally consistent manner. We demonstrate compelling empirical benefits of the learned representation across a diverse suite of vision tasks: video object segmentation, surface normal estimation, and semantic segmentation.
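As a rough illustration of the linear in-context learning objective described above, the following PyTorch sketch fits a closed-form linear head on pixel features from context frames against their cue maps (e.g. depth or motion), then scores that head on a query frame. All names here (ridge_fit, in_context_loss, the encoder interface) are hypothetical illustrations, not the paper's actual implementation; see the arXiv and GitHub links below for the real method.

    # Minimal, hypothetical sketch of a linear in-context learning objective.
    # Assumes the encoder returns channels-last per-pixel features (..., D).
    import torch

    def ridge_fit(feats, targets, lam=1e-3):
        """Closed-form ridge regression mapping pixel features to cue values.

        feats:   (N, D) features gathered from context frames
        targets: (N, C) spatio-temporal cues (e.g. depth, motion) at those pixels
        """
        d = feats.shape[1]
        gram = feats.T @ feats + lam * torch.eye(d, device=feats.device)
        return torch.linalg.solve(gram, feats.T @ targets)  # weights: (D, C)

    def in_context_loss(encoder, ctx_frames, ctx_cues, qry_frame, qry_cues):
        """Fit a linear head on context pixels, evaluate it on query pixels.

        The ridge solve is differentiable, so minimising this loss trains the
        encoder to produce features from which the cues are linearly decodable.
        """
        ctx_feats = encoder(ctx_frames).flatten(0, -2)   # (N_ctx, D)
        qry_feats = encoder(qry_frame).flatten(0, -2)    # (N_qry, D)
        w = ridge_fit(ctx_feats, ctx_cues.flatten(0, -2))
        pred = qry_feats @ w                             # (N_qry, C)
        return torch.nn.functional.mse_loss(pred, qry_cues.flatten(0, -2))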

Preprint

Apr. 2026

Authors

N. Araslanov • M. Sundermeyer • H. Matsuki • D. J. Tan • F. Tombari

Links

arXiv • GitHub

Research Areas

B1 | Computer Vision

C1 | Medicine

BibTeX Key: ASM+26
