Hateful Person or Hateful Model? Investigating the Role of Personas in Hate Speech Detection by Large Language Models

MCML Authors

Hinrich Schütze

Prof. Dr.

Principal Investigator

Abstract

Hate speech detection is a socially sensitive and inherently subjective task, with judgments often varying based on personal traits. While prior work has examined how socio-demographic factors influence annotation, the impact of personality traits on Large Language Models (LLMs) remains largely unexplored. In this paper, we present the first comprehensive study on the role of persona prompts in hate speech classification, focusing on MBTI-based traits. A human annotation survey confirms that MBTI dimensions significantly affect labeling behavior. Extending this to LLMs, we prompt four open-source models with MBTI personas and evaluate their outputs across three hate speech datasets. Our analysis uncovers substantial persona-driven variation, including inconsistencies with ground truth, inter-persona disagreement, and logit-level biases. These findings highlight the need to carefully define persona prompts in LLM-based annotation workflows, with implications for fairness and alignment with human values.
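The persona-prompting setup the abstract describes can be sketched as follows. This is a minimal illustration only: the prompt template, label set, and function names are assumptions for exposition, not the authors' actual prompts.

```python
# Sketch: constructing MBTI persona prompts for binary hate speech labeling.
# The template wording and 'hate' / 'not-hate' label set are illustrative
# assumptions, not taken from the paper.

MBTI_TYPES = [
    "ISTJ", "ISFJ", "INFJ", "INTJ",
    "ISTP", "ISFP", "INFP", "INTP",
    "ESTP", "ESFP", "ENFP", "ENTP",
    "ESTJ", "ESFJ", "ENFJ", "ENTJ",
]

def build_persona_prompt(mbti_type: str, text: str) -> str:
    """Prepend an MBTI persona instruction to a hate speech query."""
    if mbti_type not in MBTI_TYPES:
        raise ValueError(f"unknown MBTI type: {mbti_type}")
    return (
        f"You are a person with the MBTI personality type {mbti_type}.\n"
        "From this persona's perspective, decide whether the following "
        "text is hate speech. Answer with exactly one word: "
        "'hate' or 'not-hate'.\n\n"
        f"Text: {text}"
    )

# One prompt per persona for the same input yields the 16 model runs
# whose label disagreement the study analyzes.
prompts = {t: build_persona_prompt(t, "An example post to classify.")
           for t in MBTI_TYPES}
print(len(prompts))
```

Each resulting prompt would be sent to the model as-is (or as a system message), and the persona-driven variation the paper reports corresponds to disagreement among the 16 outputs for the same input text.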


Preprint

Jun. 2025

Authors

S. Yuan • E. Nie • M. Tawfelis • H. Schmid • H. Schütze • M. Färber

In Collaboration

Research Area

B2 | Natural Language Processing

BibTeX Key: YNT+25
