
Algorithmic Fidelity of Large Language Models in Generating Synthetic German Public Opinions: A Case Study

Abstract

In recent research, large language models (LLMs) have been increasingly used to investigate public opinions. This study examines the algorithmic fidelity of LLMs, i.e., their ability to replicate the socio-cultural context and nuanced opinions of human participants. Using open-ended survey data from the German Longitudinal Election Studies (GLES), we prompt different LLMs to generate synthetic public opinions reflective of German subpopulations by incorporating demographic features into the persona prompts. Our results show that Llama represents subpopulations better than the other LLMs, particularly when opinion diversity within those groups is lower. Our findings further reveal that the LLMs perform better for supporters of left-leaning parties such as The Greens and The Left than for other parties, and match least well with the right-wing party AfD. Additionally, including or excluding specific variables in the prompts can significantly affect the models' predictions. These findings underscore the importance of aligning LLMs to model diverse public opinions more effectively while minimizing political biases and enhancing robustness in representativeness.
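The persona-prompting setup described in the abstract can be sketched roughly as follows. All field names, the prompt template, and the example survey question are illustrative assumptions for this sketch, not the paper's actual prompts or GLES variable names:

```python
# Hypothetical sketch of demographic persona prompting; the template
# wording and demographic fields are assumptions, not taken from the paper.

def build_persona_prompt(demographics: dict, question: str) -> str:
    """Compose a persona prompt from GLES-style demographic features."""
    persona = (
        f"You are a {demographics['age']}-year-old {demographics['gender']} "
        f"from {demographics['region']} with {demographics['education']} "
        f"education who leans toward {demographics['party']}."
    )
    return (
        f"{persona}\n"
        f"Answer the following open-ended survey question in German:\n"
        f"{question}"
    )

# Example usage with an invented respondent profile:
prompt = build_persona_prompt(
    {"age": 54, "gender": "woman", "region": "Bavaria",
     "education": "vocational", "party": "The Greens"},
    "What is currently the most important political problem in Germany?",
)
print(prompt)
```

The resulting prompt would then be sent to each LLM under study, and the generated answers compared against the real open-ended responses of the matching subpopulation.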

BibTeX type: inproceedings


ACL 2025

63rd Annual Meeting of the Association for Computational Linguistics. Vienna, Austria, Jul 27-Aug 01, 2025.
A* Conference

Authors

B. Ma • B. Yoztyurk • A.-C. Haensch • X. Wang • M. Herklotz • F. Kreuter • B. Plank • M. Aßenmacher

Research Areas

 A1 | Statistical Foundations & Explainability

 B2 | Natural Language Processing

 C4 | Computational Social Sciences

BibTeX key: MYH+25
