
Can Prompting LLMs Unlock Hate Speech Detection Across Languages? A Zero-Shot and Few-Shot Study


Abstract

Despite growing interest in automated hate speech detection, most existing approaches overlook the linguistic diversity of online content. Multilingual instruction-tuned large language models such as LLaMA, Aya, Qwen, and BloomZ offer promising capabilities across languages, but their effectiveness in identifying hate speech through zero-shot and few-shot prompting remains underexplored. This work evaluates LLM prompting-based detection across eight non-English languages, utilizing several prompting techniques and comparing them to fine-tuned encoder models. We show that while zero-shot and few-shot prompting lag behind fine-tuned encoder models on most of the real-world evaluation sets, they achieve better generalization on functional tests for hate speech detection. Our study also reveals that prompt design plays a critical role, with each language often requiring customized prompting techniques to maximize performance.
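To make the two prompting regimes the abstract compares concrete, the sketch below builds a zero-shot prompt (instruction only) and a few-shot prompt (instruction plus labeled in-context examples) for a binary hate speech label. This is a minimal illustration, not the paper's actual templates: the wording, label set, and example texts are all hypothetical.

```python
# Illustrative sketch of zero-shot vs. few-shot prompt construction for
# multilingual hate speech classification. Template wording and examples
# are hypothetical, not taken from the paper.

def build_prompt(text, language, examples=None):
    """Return a zero-shot prompt, or a few-shot prompt if examples are given.

    examples: optional list of (text, label) pairs, labels in
    {"hate", "not hate"}, prepended as in-context demonstrations.
    """
    instruction = (
        f"Classify the following {language} text as 'hate' or 'not hate'. "
        "Answer with a single label.\n"
    )
    shots = ""
    if examples:
        for ex_text, ex_label in examples:
            shots += f"Text: {ex_text}\nLabel: {ex_label}\n\n"
    return f"{instruction}\n{shots}Text: {text}\nLabel:"

# Zero-shot: the model sees only the instruction and the query text.
zero_shot = build_prompt("Ein Beispielsatz.", "German")

# Few-shot: a handful of labeled examples precede the query text.
few_shot = build_prompt(
    "Ein Beispielsatz.",
    "German",
    examples=[
        ("Harmloser Satz.", "not hate"),
        ("Beleidigender Satz.", "hate"),
    ],
)
```

In practice, such prompts would be sent to an instruction-tuned model (e.g., one of the LLaMA, Aya, Qwen, or BloomZ families named above), and, as the abstract notes, the best-performing template often differs per language.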



WOAH @ACL 2025

9th Workshop on Online Abuse and Harms at the 63rd Annual Meeting of the Association for Computational Linguistics. Vienna, Austria, Jul 27-Aug 01, 2025.

Authors

F. Ghorbanpour, D. Dementieva, A. Fraser


Research Area

 B2 | Natural Language Processing

BibTeX Key: GDF+25
