
Supporting Understanding: Comparing Conversational Interviewing in Chatbot vs Human-Administered Patient-Reported Outcomes for Stroke Survivors

Abstract

Hospitals use patient-reported outcome measurements (PROMs) to monitor stroke survivors post-discharge. Unfortunately, PROM items are prone to misinterpretation. Human-administered PROMs mitigate comprehension issues through conversational interviewing (CI), in which facilitators provide real-time clarification. However, chatbots have not yet adopted CI techniques, raising questions about whether CI-enabled chatbots can effectively support PROM administration. We conducted a controlled experiment with 18 stroke survivors comparing two CI clarification techniques, (1) additional examples and (2) reflective prompts, against (3) no clarification (control), each administered by both a chatbot and a human facilitator. Our findings demonstrate the feasibility of CI-enabled chatbots, as participants' responses remained consistent across facilitators and clarification types. Participants exhibited better PROM answer behaviour when using the chatbot, with fewer anecdotal digressions and greater adherence to validated response options. Notably, when receiving CI support, participants stopped asking the human facilitator for clarification. Preferences for clarification varied by question type: participants favoured examples for physical activity questions, while no clarification was preferred for mental health–related items.

ACM Transactions on Computing for Healthcare

Early Access. Feb. 2026.

Authors

M. M. Skovfoged • K. Mavromati • L. Tvrdá • T. Quinn • F. Körner • S. Chiesurin • V. Nemcova • R. Mikulik • H. Knoche

Links

DOI

Research Area

B2 | Natural Language Processing

BibTeX Key: SMT+26
