
SmoothCLAP: Soft-Target Enhanced Contrastive Language-Audio Pretraining for Affective Computing


Abstract

The ambiguity of human emotions poses several challenges for machine learning models, as emotions often overlap and lack clear delineating boundaries. Contrastive language-audio pretraining (CLAP) has emerged as a key technique for generalisable emotion recognition. However, as conventional CLAP enforces a strict one-to-one alignment between paired audio-text samples, it overlooks intra-modal similarity and treats all non-matching pairs as equally negative. This conflicts with the fuzzy boundaries between different emotions. To address this limitation, we propose SmoothCLAP, which introduces softened targets derived from intra-modal similarity and paralinguistic features. By combining these softened targets with conventional contrastive supervision, SmoothCLAP learns embeddings that respect graded emotional relationships while retaining the same inference pipeline as CLAP. Experiments on eight affective computing tasks across English and German demonstrate that SmoothCLAP consistently achieves superior performance. Our results highlight that leveraging soft supervision is a promising strategy for building emotion-aware audio-text models.
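To illustrate the idea of blending softened targets with conventional contrastive supervision, here is a minimal PyTorch sketch. It is not the paper's implementation: the function name, the blending weight alpha, the temperature tau, and the use of intra-modal softmax distributions as soft targets are all illustrative assumptions based on the abstract's description.

```python
# Hypothetical sketch of soft-target contrastive supervision in the spirit of
# SmoothCLAP. All names and hyperparameters are assumptions, not the authors' code.
import torch
import torch.nn.functional as F

def soft_clap_loss(audio_emb, text_emb, alpha=0.5, tau=0.07):
    """Contrastive loss whose one-hot targets are softened by intra-modal similarity.

    audio_emb, text_emb: (batch, dim) L2-normalised embeddings.
    alpha: blend between hard one-hot CLAP targets and soft targets (assumed).
    tau: temperature (assumed).
    """
    # Cross-modal similarity logits, as in conventional CLAP.
    logits = audio_emb @ text_emb.t() / tau

    with torch.no_grad():
        # Soft targets: distributions over the batch from intra-modal
        # similarities, so emotionally close samples are not treated
        # as equally hard negatives.
        soft_a = F.softmax(audio_emb @ audio_emb.t() / tau, dim=-1)
        soft_t = F.softmax(text_emb @ text_emb.t() / tau, dim=-1)
        hard = torch.eye(audio_emb.size(0), device=audio_emb.device)
        targets_a2t = alpha * hard + (1 - alpha) * soft_t
        targets_t2a = alpha * hard + (1 - alpha) * soft_a

    # Symmetric cross-entropy against the softened target distributions.
    loss_a2t = (-targets_a2t * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
    loss_t2a = (-targets_t2a * F.log_softmax(logits.t(), dim=-1)).sum(dim=-1).mean()
    return (loss_a2t + loss_t2a) / 2
```

With alpha = 1 this reduces to the standard CLAP objective, so the soft-target term can be read as a regulariser that relaxes the strict one-to-one alignment.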

Preprint

Jan. 2026

Authors

X. Jing • J. Wang • A. Triantafyllopoulos • M. Gerczuk • S. Amiriparian • J. Luo • B. W. Schuller

Links

arXiv

Research Area

B3 | Multimodal Perception

BibTeX Key: JWT+26
