
PAWS: Perception of Articulation in the Wild at Scale From Egocentric Videos

MCML Authors


Dr. Xi Wang

JRG Leader Egocentric Vision

Abstract

Articulation perception aims to recover the motion and structure of articulated objects (e.g., drawers and cupboards) and is fundamental to 3D scene understanding in robotics, simulation, and animation. Existing learning-based methods rely heavily on supervised training with high-quality 3D data and manual annotations, limiting scalability and diversity. To address this limitation, we propose PAWS, a method that directly extracts object articulations from hand-object interactions in large-scale in-the-wild egocentric videos. We evaluate our method on public datasets, including HD-EPIC and Arti4D, achieving significant improvements over baselines. We further demonstrate that the extracted articulations benefit downstream tasks, including fine-tuning 3D articulation prediction models and enabling robot manipulation.



Preprint

Mar. 2026

Authors

Y. Wang • Y. Miao • W. Zhao • W. Yang • Z. Wang • J. Pajarinen • L. Van Gool • D. P. Paudel • J. Kannala • X. Wang • A. Solin

Links

arXiv GitHub

Research Area

 B1 | Computer Vision

BibTeXKey: WMZ+26
