
Multilingual and Multimodal Bias Probing and Mitigation in Natural Language Processing

MCML Authors

Dr. Victor Steinborn

Abstract

This thesis explores gender bias in Natural Language Processing (NLP) models and highlights its negative societal impacts, such as discrimination in automated recruitment. While existing research largely focuses on English and on occupational biases, this work expands the scope by addressing biases across different languages and contexts. The thesis presents three projects: (1) creating a multilingual dataset and a new bias evaluation measure, (2) examining how gender stereotypes in politeness affect cyberbullying detection in Korean and Japanese, and (3) analyzing how emoji-based visual representations influence biased text generation. These contributions aim to make NLP systems fairer and more inclusive.

Dissertation

LMU München. Apr. 2024

Authors

V. Steinborn


Research Area

B2 | Natural Language Processing

BibTeX Key: Ste24
