Training-Free Text Emotion Tagging via LLM-Based Best-Worst Scaling

MCML Authors

Dr. Shahin Amiriparian

Abstract

Large Language Models (LLMs) are frequently used as automatic annotators for tasks such as Text Emotion Recognition (TER). We consider a scenario in which annotators assign at least one emotion label from a large set of options to a text snippet. For this emotion tagging task, we propose a novel zero-shot algorithm that leverages Best-Worst Scaling (BWS), prompting the LLM to choose the least and most suitable emotions for a given text from several label subsets. The LLM's choices can be represented as a graph linking labels via worse-than relations; random walks on this graph yield the final score for each label. We compare our algorithm with naive prompting approaches as well as an established BWS-based method. Extensive experiments demonstrate the suitability of the method: it compares favorably to the baselines in terms of both accuracy and calibration with respect to human annotations. Moreover, our algorithm's automatic annotations prove suitable for fine-tuning lightweight emotion classification models, and the proposed method consumes considerably fewer computational resources than the established BWS approach.
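The scoring idea from the abstract can be sketched in code. The following is an illustrative approximation, not the authors' exact algorithm: the LLM's best/worst choices are assumed to arrive as `(subset, best, worst)` tuples, the worse-than graph links each non-best item to the best and the worst item to every other subset member, and the random walk is realized as a PageRank-style power iteration along worse-to-better edges. The function name, input format, and damping parameter are all hypothetical.

```python
from collections import defaultdict

def bws_scores(comparisons, labels, damping=0.85, iters=100):
    """Score emotion labels from Best-Worst Scaling choices via random walks.

    comparisons: list of (subset, best, worst) tuples, e.g. elicited from
    an LLM annotator (format assumed here for illustration).
    Returns a dict mapping each label to its stationary-walk score.
    """
    # Build worse-than edges: worse -> better.
    out_edges = defaultdict(list)
    for subset, best, worst in comparisons:
        for item in subset:
            if item != best:
                out_edges[item].append(best)   # item is worse than best
            if item != worst:
                out_edges[worst].append(item)  # worst is worse than item

    n = len(labels)
    score = {label: 1.0 / n for label in labels}
    for _ in range(iters):
        new = {label: (1 - damping) / n for label in labels}
        for src, dests in out_edges.items():
            share = damping * score[src] / len(dests)
            for dest in dests:
                new[dest] += share
        # Labels with no outgoing edges redistribute their mass uniformly.
        dangling = damping * sum(score[l] for l in labels if l not in out_edges)
        for label in labels:
            new[label] += dangling / n
        score = new
    return score
```

A label chosen as "most suitable" across several subsets accumulates incoming worse-than edges and therefore probability mass, ending up with the highest score, while repeatedly rejected labels drain toward the bottom.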

inproceedings CA26


Findings @EACL 2026

Findings of the 19th Conference of the European Chapter of the Association for Computational Linguistics. Rabat, Morocco, Mar 24-29, 2026.

Authors

L. Christ • S. Amiriparian

Links

DOI

Research Area

 B3 | Multimodal Perception

BibTeXKey: CA26
