
Evaluating Zero-Shot Multilingual Aspect-Based Sentiment Analysis With Large Language Models

Abstract

Aspect-based sentiment analysis (ABSA), a sequence labeling task, has attracted increasing attention in multilingual contexts. While previous research has focused largely on fine-tuning or training models specifically for ABSA, we evaluate large language models (LLMs) under zero-shot conditions to explore their potential to tackle this challenge with minimal task-specific adaptation. We conduct a comprehensive empirical evaluation of a series of LLMs on multilingual ABSA tasks, investigating various prompting strategies, including vanilla zero-shot, chain-of-thought (CoT), self-improvement, self-debate, and self-consistency, across nine different models. Results indicate that while LLMs show promise in handling multilingual ABSA, they generally fall short of fine-tuned, task-specific models. Notably, simpler zero-shot prompts often outperform more complex strategies, especially in high-resource languages like English. These findings underscore the need for further refinement of LLM-based approaches to effectively address the ABSA task across diverse languages.
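The vanilla zero-shot setting described in the abstract can be sketched as a single instruction-style prompt per sentence. A minimal illustration, assuming a hypothetical prompt wording and the standard positive/negative/neutral label set (not the paper's actual prompts):

```python
# Illustrative sketch of a vanilla zero-shot ABSA prompt. The instruction
# wording and label set are assumptions for illustration only.

def build_zero_shot_absa_prompt(sentence: str, language: str) -> str:
    """Compose a zero-shot prompt asking an LLM to extract
    (aspect term, sentiment polarity) pairs from one sentence."""
    return (
        f"You are an aspect-based sentiment analysis system for {language}.\n"
        "Extract every aspect term in the sentence and label its sentiment "
        "as positive, negative, or neutral.\n"
        "Answer as a list of (aspect, sentiment) pairs.\n\n"
        f"Sentence: {sentence}"
    )

prompt = build_zero_shot_absa_prompt(
    "The battery life is great but the screen is dim.", "English"
)
print(prompt)
```

Chain-of-thought and self-consistency variants would extend this template with reasoning instructions or multiple sampled answers, respectively.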



International Journal of Machine Learning and Cybernetics

Jun. 2025.

Authors

C. Wu • B. Ma • Z. Zhang • N. Deng • Y. He • Y. Xue

Links

DOI

Research Areas

 A1 | Statistical Foundations & Explainability

 C4 | Computational Social Sciences

BibTeX Key: WMZ+25
