
20.06.2025


Zooming in on Moments: ReVisionLLM for Long-Form Video Understanding

MCML Research Insight - With Tanveer Hannan and Thomas Seidl

Imagine watching a two-hour video and trying to find the exact moment someone scores a goal - or says something important. Humans can do this with ease by skimming and zooming in. But for AI, finding specific moments in long videos is incredibly hard.

Most current AI systems struggle to handle more than a few minutes of video at a time. They tend to sacrifice one thing for another - either losing fine-grained detail or skipping over important moments entirely. That's where ReVisionLLM comes in.

ReVisionLLM, developed by MCML Junior Member Tanveer Hannan, MCML Director Thomas Seidl and collaborators Md Mohaiminul Islam, Jindong Gu and Gedas Bertasius, mimics how humans search through content: by starting broad, identifying interesting segments, and then zooming in recursively to locate precise start and end points of an event.


«To our knowledge, ReVisionLLM is the first VLM capable of temporal grounding in hour-long videos.»


Tanveer Hannan

MCML Junior Member

Why Temporal Grounding Matters

From surveillance and sports analytics to educational video search, the ability to link language queries to exact video moments, called temporal grounding, is a major step toward intelligent video understanding. But scaling this to hours of footage is extremely hard due to memory limits, dense video data, and noisy confidence estimates.
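
To make this concrete: a temporal grounding system takes a natural-language query and returns a time interval in the video. On benchmarks, such predictions are typically scored by temporal intersection-over-union (IoU) against the ground-truth interval - a minimal sketch of that metric:

```python
def temporal_iou(pred: tuple[float, float], gt: tuple[float, float]) -> float:
    """Temporal intersection-over-union of two (start, end) intervals in seconds.

    This is the score behind common benchmark metrics such as Recall@K at
    fixed IoU thresholds; values near 1.0 mean the predicted moment tightly
    matches the ground truth.
    """
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

# A prediction off by a couple of seconds still scores reasonably well:
print(temporal_iou((62.0, 75.0), (60.0, 74.0)))  # ~0.80
```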


Recursive, Just Like Us

ReVisionLLM uses a hierarchical strategy inspired by cognitive science. It processes video in layers, from coarse to fine, narrowing down the relevant segments at each stage. First, it identifies promising multi-minute chunks, then drills down to short spans of just a few seconds. This recursive structure allows the model to work efficiently without processing every single frame at once.

Recursive Video Grounding

Recursive Video Grounding: ReVisionLLM is a recursive vision-language model designed for localizing events in hour-long videos. Inspired by human search strategies, it first scans the entire video to identify relevant intermediate segments and then zooms in to precisely locate event boundaries. Here, we show one intermediate hierarchy for brevity.
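
The recursion pictured above can be written down in a few lines. The snippet below is an illustration of the coarse-to-fine idea, not the paper's implementation: `score` is a hypothetical stand-in for the model's relevance estimate for a chunk, and `n_chunks`, `keep_top`, and `min_span_s` are illustrative knobs.

```python
from typing import Callable, List, Tuple

Interval = Tuple[float, float]  # (start_s, end_s) in seconds

def recursive_ground(
    query: str,
    span: Interval,
    score: Callable[[str, Interval], float],  # stand-in for the VLM's relevance estimate
    n_chunks: int = 8,         # sub-segments scanned per hierarchy level
    keep_top: int = 2,         # promising chunks to zoom into
    min_span_s: float = 10.0,  # below this duration, stop recursing
) -> List[Interval]:
    """Coarse-to-fine moment search: scan wide, keep the best chunks, recurse."""
    start, end = span
    if end - start <= min_span_s:
        # Fine enough: a precise boundary predictor takes over from here.
        return [span]
    width = (end - start) / n_chunks
    chunks = [(start + i * width, start + (i + 1) * width) for i in range(n_chunks)]
    ranked = sorted(chunks, key=lambda c: score(query, c), reverse=True)
    hits: List[Interval] = []
    for chunk in ranked[:keep_top]:
        hits.extend(recursive_ground(query, chunk, score, n_chunks, keep_top, min_span_s))
    return hits

# Toy usage: pretend the queried event happens around t=1234s in a one-hour video.
fake_score = lambda q, c: -abs((c[0] + c[1]) / 2 - 1234.0)
print(recursive_ground("someone scores a goal", (0.0, 3600.0), fake_score, keep_top=1))
```

Because only the top-scoring chunks are expanded at each level, the number of segments actually inspected stays far below what a dense scan over the full hour would require.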

VTimeLLM vs. ReVisionLLM

Existing vision-language models (VLMs) such as VTimeLLM are not equipped to process hour-long videos effectively and struggle to pinpoint precise temporal boundaries for events within extended footage. In contrast, ReVisionLLM is the first VLM designed to address this limitation, enabling accurate temporal grounding in hour-long video content.


«Our model significantly outperforms previous state-of-the-art approaches, surpassing specialized models and other VLMs on multiple datasets by a substantial margin.»


Tanveer Hannan

MCML Junior Member

So, What Can It Do?

  • Handles videos of any length, from minutes to hours
  • Follows natural language instructions to locate events
  • Outperforms previous models on major benchmarks like MAD and VidChapters

With ReVisionLLM, machines are getting closer to understanding video the way humans do - patiently, efficiently, and with precision.


Curious to learn more about ReVisionLLM and how it is trained?

Check out the full paper presented at the prestigious A* conference CVPR 2025 - IEEE/CVF Conference on Computer Vision and Pattern Recognition:

T. Hannan, M. M. Islam, J. Gu, T. Seidl and G. Bertasius.
ReVisionLLM: Recursive Vision-Language Model for Temporal Grounding in Hour-Long Videos.
CVPR 2025 - IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, TN, USA, Jun 11-15, 2025.

Additional Material

Qualitative results on MAD

Qualitative results on MAD. ReVisionLLM accurately locates precise event boundaries that involve intricate actions (top) and complex visual details (bottom) within hour-long movies. In contrast, our VLM baseline fails entirely to capture these events.

The ReVisionLLM model

The ReVisionLLM model. (Left) First, we detect segments (e.g., a few minutes) from an hour-long video using sparse temporal features produced by the Hierarchical Adapter. (Right) Then ReVisionLLM produces a precise temporal boundary using dense temporal features within the predicted segments. Note that the green box represents the same event boundary in both sub-figures, zooming in from left to right. The multimodal encoder is omitted for simplicity.
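
In the paper, the sparse and dense temporal features come from the Hierarchical Adapter; the toy snippet below only illustrates the frame-budget intuition behind that split - skim an hour at a low sampling rate, then look densely inside one predicted segment. The sampling rates are illustrative assumptions, not the paper's settings.

```python
def sample_times(start_s: float, end_s: float, rate_hz: float) -> list[float]:
    """Evenly spaced timestamps (seconds) at which to sample frames in a span."""
    n = max(1, round((end_s - start_s) * rate_hz))
    step = (end_s - start_s) / n
    return [start_s + (i + 0.5) * step for i in range(n)]

# Coarse pass: skim the full hour sparsely to find candidate segments ...
coarse = sample_times(0.0, 3600.0, rate_hz=0.1)    # ~360 frames for 60 minutes
# ... then a dense pass inside one predicted segment to fix exact boundaries.
dense = sample_times(1200.0, 1320.0, rate_hz=2.0)  # ~240 frames for 2 minutes
print(len(coarse), len(dense))  # 360 240
```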

Progressive Training Method

Progressive Training Method. Our model is trained progressively: first on short video segments and then on hour-long videos. (Left) In the first stage, the model learns to detect whether an event is present in the input video and, if so, predicts its precise start and endpoints. Sparse features help determine an event’s presence, while dense features additionally facilitate exact localization. (Right) In the second stage, we utilize the sparse features learned in Stage 1 to identify event segments within hour-long videos.
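
Schematically, the curriculum can be outlined as below. This is a sketch of the training order just described, not the actual training code; `fit` is a hypothetical stand-in for one training pass, and the target names are illustrative.

```python
def progressive_training(model, short_clips, long_videos, fit):
    """Two-stage curriculum sketch (hypothetical `fit` = one training pass)."""
    # Stage 1: short video segments. The model learns to decide whether the
    # queried event is present and, if so, to predict precise start/end times.
    # Sparse features carry the presence signal; dense features refine boundaries.
    model = fit(model, short_clips, targets=("presence", "boundaries"))
    # Stage 2: hour-long videos. The sparse features learned in stage 1 are
    # reused to pick out which multi-minute segments contain the event.
    model = fit(model, long_videos, targets=("segments",))
    return model
```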


Share Your Research!



Are you an MCML Junior Member and interested in showcasing your research on our blog?

We're happy to feature your work - get in touch with us to present your paper.

