Dr. Victor Steinborn (Former Member)
This thesis explores gender bias in Natural Language Processing (NLP) models, highlighting its negative societal impacts, such as discrimination in automated recruitment. Whereas existing research focuses largely on English and on occupational biases, this work broadens the scope to biases across other languages and contexts. The thesis comprises three projects: (1) creating a multilingual dataset and a new bias evaluation measure, (2) examining how gender stereotypes in politeness affect cyberbullying detection in Korean and Japanese, and (3) analyzing how emoji-based visual representations influence biased text generation. These contributions aim to enhance fairness and inclusivity in NLP systems. (Shortened.)
BibTeXKey: Ste24