21.03.2025
©aiforgood
Explainable Multimodal Agents With Symbolic Representations & Can AI Be Less Biased?
Ruotong Liao at United Nations AI for Good
More than 170 attendees joined the online lecture of our Junior Member Ruotong Liao, PhD student in the group of our PI Volker Tresp, who spoke on Monday, 17 March 2025, as an invited speaker at the United Nations "AI for Good" platform.
With her talk "Perceive, Remember, and Predict: Explainable Multimodal Agents with Symbolic Representations," Ruotong Liao took part in the online event "Explainable Multimodal Agents with Symbolic Representations & Can AI be less biased?"
At the event, hosted by the leading platform for artificial intelligence for sustainable development, Ruotong Liao presented her research results, focusing on how the integration of temporal reasoning and symbolic knowledge about evolving events enables LLMs to make structured, interpretable, and context-sensitive predictions. She presented work aimed at developing explainable multimodal agents capable of perceiving, remembering, predicting, and justifying their conclusions over time.
Watch the full presentation in the stream.