30.10.2025

Strength of Gender Biases in AI Images Varies Across Languages

Alexander Fraser Shows Text-to-Image Generators Reproduce and Magnify Role Stereotypes

The team of MCML PI Alexander Fraser, together with researchers at TU Darmstadt, has studied how text-to-image generators handle gender stereotypes in various languages. The results show that the models not only reflect gender biases, but also amplify them. The direction and strength of the distortion depend on the language in question.

In social media, in web searches and on posters: AI-generated images can now be found everywhere. AI models such as ChatGPT are capable of converting simple text input into deceptively realistic images. Researchers have now demonstrated that the generation of such artificial images not only reproduces gender biases, but actually magnifies them.

Models in Different Languages Investigated

The study explored models across nine languages and compared the results. Previous studies had generally focused only on English-language models. As a benchmark, the team developed the Multilingual Assessment of Gender Bias in Image Generation (MAGBIG). It is based on carefully controlled occupational designations. The study investigated four different types of prompts: direct prompts that use the ‘generic masculine’ in languages in which the generic term for an occupation is grammatically masculine (‘doctor’), indirect descriptions (‘a person working as a doctor’), explicitly feminine prompts (‘female doctor’) and ‘gender star’ prompts (the German convention intended to create a gender-neutral designation by using an asterisk, e.g. ‘Ärzt*innen’ for doctors).
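The four prompt types can be sketched in code. This is an illustrative example only, not the official MAGBIG benchmark code: the function name, the German phrasings and the word forms are assumptions chosen to mirror the variants described above.

```python
def build_prompts(masculine: str, feminine: str, gender_star: str) -> dict:
    """Construct the four prompt variants for one occupation (German example)."""
    return {
        # Direct prompt using the generic masculine form
        "direct": f"Ein Foto von einem {masculine}",
        # Indirect description that avoids naming a gendered form as subject
        "indirect": f"Ein Foto von einer Person, die als {masculine} arbeitet",
        # Explicitly feminine prompt
        "feminine": f"Ein Foto von einer {feminine}",
        # German 'gender star' form intended as gender-neutral
        "gender_star": f"Ein Foto von {gender_star}",
    }

# Example: the word forms for 'doctor' mentioned in the article
prompts = build_prompts("Arzt", "Ärztin", "Ärzt*innen")
for variant, text in prompts.items():
    print(f"{variant}: {text}")
```

Holding everything constant except the occupational word form is what lets the study attribute differences in the generated images to the prompt's grammatical gender rather than to the occupation itself.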

To make the results comparable, the researchers included languages in which the names of occupations are gendered, such as German, Spanish and French. In addition, the study incorporated languages such as English and Japanese that use only one grammatical gender but have gendered pronouns (‘her’, ‘his’). And finally, it included languages without grammatical gender: Korean and Chinese.

AI Images Perpetuate and Magnify Role Stereotypes

The results of the study show that direct prompts with the generic masculine produce the strongest biases. For example, prompts for occupations such as ‘accountant’ yield mostly images of white males, while prompts referring to caregiving professions tend to generate female-presenting images. Gender-neutral or ‘gender-star’ forms only slightly mitigated these stereotypes, while images resulting from explicitly feminine prompts showed almost exclusively women. Along with the gender distribution, the researchers also analyzed how well the models understood and executed the various prompts. While neutral formulations were seen to reduce gender stereotypes, they also led to a poorer match between the text input and the generated image.
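The claim that models not only reflect but *amplify* stereotypes can be made concrete with a simple signed measure: compare the share of female-presenting images a model generates for an occupation against a real-world reference share. This is a minimal sketch under assumed definitions, not the metric used in the paper, and the numbers below are purely illustrative.

```python
def bias_gap(generated_female_share: float, reference_female_share: float) -> float:
    """Signed gap between reality and generation for one occupation.

    Positive values mean the model under-represents women relative to the
    reference; negative values mean it over-represents them. A gap larger
    than the reference's own distance from parity indicates amplification.
    """
    return reference_female_share - generated_female_share

# Illustrative numbers: an occupation that is 45% female in a reference
# statistic, but whose direct-prompt images are only 10% female-presenting.
gap = bias_gap(generated_female_share=0.10, reference_female_share=0.45)
print(round(gap, 2))
```

Averaging such gaps over many occupations, and comparing them across the four prompt types and nine languages, is one straightforward way to turn the qualitative finding above into a number.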

“Our results clearly show that the language structures have a considerable influence on the balance and bias of AI image generators,” says Alexander Fraser, Professor for Data Analytics & Statistics at TUM Campus in Heilbronn. “Anyone using AI systems should be aware that different wordings may result in entirely different images and may therefore magnify or mitigate societal role stereotypes.”

“AI image generators are not neutral—they illustrate our prejudices in high resolution, and this depends crucially on language. Especially in Europe, where many languages converge, this is a wake-up call: fair AI must be designed with language sensitivity in mind,” adds Kristian Kersting, co-director of hessian.AI and co-spokesperson for the “Reasonable AI” cluster of excellence at TU Darmstadt.

Remarkably, bias varies across languages without a clear link to grammatical structures. For example, switching from French to Spanish prompts leads to a substantial increase in gender bias, despite both languages distinguishing in the same way between male and female occupational terms.

A* Conference
F. Friedrich, K. Hämmerl, P. Schramowski, M. Brack, J. Libovicky, K. Kersting, A. Fraser.
Multilingual Text-to-Image Generation Magnifies Gender Stereotypes and Prompt Engineering May Not Help You.
ACL 2025 - 63rd Annual Meeting of the Association for Computational Linguistics. Vienna, Austria, Jul 27-Aug 01, 2025. DOI
