30.10.2025
Language Shapes Gender Bias in AI Images
TUM News
Alexander Fraser, MCML PI, and his team found that AI image generators reproduce gender stereotypes differently across languages. In a study spanning nine languages, generic prompts such as “accountant” mostly produced images of men, while explicitly feminine or gender-neutral prompts reduced this bias but sometimes degraded image quality.
The study shows that AI is not language-agnostic and that careful wording can influence outcomes, underscoring the need for fairness and multilingual awareness in AI systems.
Related
18.12.2025
"See, Don’t Assume": Revealing and Reducing Gender Bias in AI
ICLR 2025 research led by Zeynep Akata’s team reveals and reduces gender bias in popular vision-language AI models.
16.12.2025
Fabian Theis Featured in Handelsblatt on the Future of AI in Precision Medicine
MCML PI Fabian Theis discusses AI-driven precision medicine and its growing impact on individualized healthcare and biomedical research.
16.12.2025
Gitta Kutyniok Featured in VDI Nachrichten on AI Ethics
Gitta Kutyniok discusses measurable criteria for ethical AI, promoting safe and responsible autonomous decision-making.
16.12.2025
Hinrich Schütze Featured in WirtschaftsWoche on Innovative AI Approaches
Hinrich Schütze discusses Giotto.ai’s efficient AI models, highlighting memory separation and context-aware decoding to improve robustness.