
VGGSounder: Audio-Visual Evaluations for Foundation Models

MCML Authors

Abstract

The emergence of audio-visual foundation models underscores the importance of reliably assessing their multi-modal understanding. The VGGSound dataset is commonly used as a benchmark for evaluating audio-visual classification. However, our analysis identifies several limitations of VGGSound, including incomplete labelling, partially overlapping classes, and misaligned modalities, which lead to distorted evaluations of auditory and visual capabilities. To address these limitations, we introduce VGGSounder, a comprehensively re-annotated, multi-label test set that extends VGGSound and is specifically designed to evaluate audio-visual foundation models. VGGSounder features detailed modality annotations, enabling precise analyses of modality-specific performance. Furthermore, we reveal model limitations by analysing performance degradation when adding another input modality with our new modality confusion metric.
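
The abstract does not define the modality confusion metric precisely; a minimal sketch of one plausible reading (the fraction of samples a model classifies correctly from a single modality but gets wrong once the second modality is added) is shown below. The function and variable names are hypothetical and not taken from the paper.

```python
import numpy as np

def modality_confusion(correct_single: np.ndarray, correct_both: np.ndarray) -> float:
    """Fraction of samples answered correctly from a single modality
    (e.g. audio only) but answered incorrectly once the other modality
    is added (audio + visual).

    Both inputs are boolean arrays of per-sample correctness.
    """
    confused = correct_single & ~correct_both
    return confused.sum() / max(correct_single.sum(), 1)

# Example: per-sample correctness under audio-only vs. audio-visual input.
audio_only = np.array([True, True, False, True])
audio_visual = np.array([True, False, False, False])
print(modality_confusion(audio_only, audio_visual))  # 2 of 3 audio-correct samples degrade
```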

inproceedings ZWP+25a


ICCV 2025

IEEE/CVF International Conference on Computer Vision. Honolulu, Hawai'i, Oct 19-23, 2025.
A* Conference

Authors

D. Zverev • T. Wiedemer • A. Prabhu • M. Bethge • W. Brendel • A. S. Koepke

Links

DOI GitHub

In Collaboration


Research Area

 B1 | Computer Vision

BibTeXKey: ZWP+25a
