
Privacy-Preserving Federated Learning for Hate Speech Detection


Abstract

This paper presents a federated learning system with differential privacy for hate speech detection, tailored to low-resource languages. Among the fine-tuned pre-trained language models evaluated, ALBERT emerged as the most effective at balancing performance and privacy. Experiments demonstrated that federated learning with differential privacy performs adequately in low-resource settings, although clients with fewer than 20 training sentences struggled because the added privacy noise overwhelmed the learning signal. Balanced datasets and augmenting hateful data with non-hateful examples proved critical for improving model utility. These findings offer a scalable and privacy-conscious framework for integrating hate speech detection into social media platforms and browsers, safeguarding user privacy while addressing online harm.
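The core mechanism the abstract refers to, federated averaging combined with differential privacy, can be sketched in a few lines. The sketch below is a generic illustration and not the paper's implementation: the function names (`clip_update`, `dp_fedavg_round`) and the clipping and noise parameters are assumptions, and the Gaussian-mechanism noise scale is deliberately simplified.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip a client's model update to a maximum L2 norm (bounds sensitivity)."""
    norm = np.linalg.norm(update)
    if norm > clip_norm:
        update = update * (clip_norm / norm)
    return update

def dp_fedavg_round(global_weights, client_updates, clip_norm=1.0,
                    noise_std=0.1, rng=None):
    """One round of federated averaging with clipped, noised client updates.

    Each client's update is clipped, the clipped updates are averaged,
    and Gaussian noise scaled to the clipping norm is added before the
    server applies the aggregate to the global weights.
    """
    rng = rng or np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_std * clip_norm / len(client_updates),
                       size=avg.shape)
    return global_weights + avg + noise
```

With few clients or tiny per-client datasets, the noise term dominates the averaged signal, which is consistent with the degradation the abstract reports for clients holding fewer than 20 sentences.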



SRW @NAACL 2025

Student Research Workshop at the Annual Conference of the North American Chapter of the Association for Computational Linguistics. Albuquerque, NM, USA, Apr 29-May 04, 2025.

Authors

I. d. S. Bueno Júnior • H. Ye • A. Wisiorek • H. Schütze

Links

DOI

Research Area

 B2 | Natural Language Processing

BibTeX Key: BIY+25
