21.03.2025

Explainable Multimodal Agents With Symbolic Representations & Can AI Be Less Biased?

Ruotong Liao at United Nations AI for Good

More than 170 attendees joined the online lecture of our Junior Member Ruotong Liao, PhD student in the group of our PI Volker Tresp, on Monday, 17 March 2025, where she appeared as an invited speaker at the United Nations "AI for Good" platform.

With her talk "Perceive, Remember, and Predict: Explainable Multimodal Agents with Symbolic Representations," Ruotong Liao took part in the online event "Explainable Multimodal Agents with Symbolic Representations & Can AI be less biased?"

At the event, hosted by the leading UN platform for artificial intelligence for sustainable development, Ruotong Liao presented her research, which focuses on how integrating temporal reasoning and symbolic knowledge about evolving events enables LLMs to make structured, interpretable, and context-sensitive predictions. She presented work aimed at developing explainable multimodal agents capable of perceiving, remembering, predicting, and justifying their conclusions over time.

Watch the full presentation in the stream.

#event #research #tresp

Related

16.04.2026

Do Language Models Reason Like Humans?

How do LLMs judge “if–then” statements? The paper accepted at EACL 2026 analyzes how probability and meaning shape LLM reasoning.

10.04.2026

MCML at CHI 2026

MCML researchers are represented with 6 papers at CHI 2026.

10.04.2026

MCML at ICPC 2026

MCML researchers are represented with 1 paper at ICPC 2026.

09.04.2026

Nikita Araslanov Receives Prestigious Emmy Noether Grant

Nikita Araslanov, MCML Junior Member, was awarded an Emmy Noether Grant to establish an independent AI research group at TUM.

02.04.2026

How AI Avatars Shape Perceived Fairness

Accepted at CHI 2026, this study shows how the race and gender of AI interview avatars shape perceptions of fairness and bias in automated hiring.
