Calibrated Epistemic Deference to Conversational AI in Mental Healthcare

MCML Authors

Dr. Benjamin Lange

JRG Leader Ethics of AI

Abstract

Sedlakova et al. (2025) argue that conversational AI (CAI) cannot satisfy the conditions of epistemic trust in therapeutic contexts. I accept their diagnosis but press a further question: given that patients will de facto treat CAI outputs as reasons, how should patients, clinicians, and designers calibrate their epistemic responses? I argue that even where epistemic trust is inappropriate, calibrated epistemic deference remains rational. On a total-evidence view, CAI outputs function as defeasible contributory reasons whose weight is proportional to demonstrated reliability and sensitive to context. Against preemptionism, I show that preemptive deference to CAI is distinctively objectionable in therapeutic contexts: it undermines precisely the epistemic capacities that psychotherapy aims to develop, namely self-knowledge, critical reflection, and reason-integration. An open question is whether calibrated deference is dynamically stable, given users' documented tendency to anthropomorphize CAI over extended interaction, which may cause deference to drift toward the quasi-trust that therapeutic contexts warrant resisting.

Article Lan26a
American Journal of Bioethics

May 2026. To be published. Preprint available.
Top Journal

Authors

B. Lange

Links

URL

Research Area

C5 | Humane AI

BibTeX Key: Lan26a
