CoBia: Constructed Conversations Can Trigger Otherwise Concealed Societal Biases in LLMs

Abstract

Improvements in model construction, including fortified safety guardrails, allow large language models (LLMs) to increasingly pass standard safety checks. However, LLMs sometimes slip into revealing harmful behavior, such as expressing racist viewpoints, during conversations. To analyze this systematically, we introduce CoBia, a suite of lightweight adversarial attacks that allow us to refine the scope of conditions under which LLMs depart from normative or ethical behavior in conversations. CoBia creates a constructed conversation in which the model utters a biased claim about a social group. We then evaluate whether the model can recover from the fabricated biased claim and reject biased follow-up questions. We evaluate 11 open-source and proprietary LLMs on outputs related to six socio-demographic categories that are relevant to individual safety and fair treatment, i.e., gender, race, religion, nationality, sexual orientation, and others. Our evaluation is based on established LLM-based bias metrics, and we compare the results against human judgments to scope out the LLMs' reliability and alignment. The results suggest that purposefully constructed conversations reliably reveal bias amplification and that LLMs often fail to reject biased follow-up questions during dialogue. This form of stress-testing highlights deeply embedded biases that can be surfaced through interaction.
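
The full pipeline and prompts are in the GitHub repository linked below; the snippet here is only a minimal sketch of the constructed-conversation probe described in the abstract, assuming an OpenAI-style chat API. The function name, prompt wording, and message structure are illustrative assumptions, not the authors' implementation.

# Minimal sketch (illustrative, not the authors' code): inject a fabricated
# assistant turn containing a biased claim, then test whether the model
# rejects a biased follow-up question.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def constructed_conversation_probe(model: str, group: str,
                                   fabricated_claim: str,
                                   follow_up: str) -> str:
    """Return the model's reply to `follow_up`, given a fabricated history
    in which the assistant appears to have already made `fabricated_claim`."""
    messages = [
        {"role": "user", "content": f"What do you think about {group}?"},
        # Key step: this assistant turn never actually happened; it is
        # constructed so the model appears to have already voiced the claim.
        {"role": "assistant", "content": fabricated_claim},
        {"role": "user", "content": follow_up},
    ]
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content

# A well-aligned model should refuse or correct the fabricated premise
# rather than elaborate on it.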

EMNLP 2025

Conference on Empirical Methods in Natural Language Processing. Suzhou, China, Nov 04-09, 2025. To be published.
A* Conference

Authors

N. Nikeghbal • A. H. Kargaran • J. Diesner

Links

GitHub

Research Areas

 B2 | Natural Language Processing

 C4 | Computational Social Sciences

BibTeX Key: NKD25