
22.09.2025


Avatar’s Presence Inspires Trust

Nassir Navab Shows How Avatars Reduce Stress in Autonomous Ultrasound Exams

Patients have more confidence in autonomous robotic ultrasound systems when an avatar guides them through the process. This was discovered by our PI Nassir Navab, Chair for Computer Aided Medical Procedures & Augmented Reality at TUM. The virtual agent explains what it is doing, answers questions and can speak any language. Such systems are intended especially for use in regions where there is a shortage of doctors.

A large screen, virtual reality glasses, a robotic arm with an ultrasound head and a powerful computer: that is the equipment needed by TUM researcher Tianyu Song from the Chair of Informatics Applications in Medicine for the autonomous examination of the aorta, carotid artery or forearm arteries. To help overcome possible doubts about the autonomous technical system, researchers have now created a virtual environment in which an avatar guides patients through the examination procedure. After putting on VR glasses, patients see an avatar that leads the conversation and answers questions. “This makes the whole process more human and friendly,” says Nassir Navab, the head of the research chair. “And this has been proven to reduce stress among users of autonomous systems.”

Virtual Environment Reduces Patients’ Stress Levels

To find out more, the researchers compared the stress levels of 14 male and female patients of varying ages across four scenarios, three of which involved some degree of virtual support. In one scenario, an avatar appeared in the real environment; in another, in a virtual environment onto which real elements were superimposed; and in the third, in a completely virtual environment. These were compared with an avatar-free, purely real variant. The researchers fitted the test subjects with sensors for an electrocardiogram (ECG) to record heart rate variability. “The more this value drops during treatment, the higher the stress level of the person being treated,” explains Tianyu Song. The result: all three virtually supported scenarios proved significantly less stressful than the non-virtual treatment. When asked which of the three virtually supported scenarios they trusted the most and which felt best, the avatar in a real environment came out on top. “That’s why we’re now using it for demonstrations,” says Nassir Navab, whose research is supported by the Bavarian Research Foundation as part of the ForNeRo research project (Research Network – Seamless and Ergonomic Integration of Robotics into Clinical Workflows).
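To illustrate the measurement principle, a common time-domain index of heart rate variability is the root mean square of successive differences (RMSSD) between heartbeats: evenly spaced beats yield a low RMSSD (lower variability, associated with higher stress), while more variable beat intervals yield a higher one. The sketch below is a minimal illustration of that index only; the article does not specify which HRV metric the researchers computed, and the interval values are invented for demonstration.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences (RMSSD), a standard
    time-domain heart rate variability index, in milliseconds.
    Input: beat-to-beat (RR) intervals from an ECG recording."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical recordings: nearly uniform intervals (low HRV, more stress)
# versus more variable intervals (high HRV, more relaxed).
uniform_beats = [800, 802, 799, 801, 800, 802]
variable_beats = [800, 850, 780, 860, 790, 845]

print(rmssd(uniform_beats) < rmssd(variable_beats))  # True
```

This matches the quoted logic: a drop in the variability value during treatment corresponds to a rise in the patient's stress level.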

Large Language Model Masters Accents

The main reason for the reduced stress levels of those being treated is the avatar, which usually has a female voice in the department’s demonstrations and walks the patient through the examination. It holds the ultrasound probe and guides it to the arm. It also talks to the patient. To make this possible, speech-recognition software converts the patient’s questions into text; a large language model then generates suitable answers based on pre-formulated instructions, and these answers are converted back into spoken words. “An important trust-building factor is the fact that the avatar not only speaks different languages, but can even do so in regional accents,” says researcher Song. For example, it can speak German with an Austrian accent or even with an American accent. The avatar can also communicate non-verbally. It gestures, pauses briefly between sentences, and turns to face patients when they speak.
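The dialogue loop described above, speech recognition followed by a prompted language model and speech synthesis, can be sketched as follows. This is a minimal illustration under stated assumptions, not the TUM group's actual implementation: the function names (`asr`, `llm`, `tts`, `answer_patient`) and the prompt text are hypothetical placeholders, and the real system's components and instructions are not specified in the article.

```python
# Hypothetical instruction set standing in for the "pre-formulated
# instructions" mentioned in the article.
SYSTEM_PROMPT = (
    "You are a friendly avatar guiding a patient through an autonomous "
    "robotic ultrasound examination. Explain each step of the scan, "
    "answer questions calmly, and reply in the patient's own language, "
    "using a regional accent if one is requested."
)

def answer_patient(audio_in, asr, llm, tts, history):
    """One turn of the avatar's question-answering loop.
    asr: speech -> text; llm: prompted text -> text; tts: text -> speech.
    The components are injected so any concrete models can be plugged in."""
    question = asr(audio_in)                             # speech to text
    history.append({"role": "user", "content": question})
    reply = llm(system=SYSTEM_PROMPT, messages=history)  # find an answer
    history.append({"role": "assistant", "content": reply})
    return tts(reply)                                    # text back to speech

# Usage with stand-in components (real ASR/LLM/TTS models would go here):
history = []
audio_out = answer_patient(
    b"<recorded audio>",
    asr=lambda audio: "What happens next?",
    llm=lambda system, messages: "Next, the probe will scan your forearm.",
    tts=lambda text: text.encode(),
    history=history,
)
print(audio_out.decode())  # Next, the probe will scan your forearm.
```

Keeping a running `history` is what lets the avatar hold a coherent conversation across several questions rather than answering each one in isolation.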

