21.03.2025

Explainable Multimodal Agents With Symbolic Representations & Can AI Be Less Biased?
Ruotong Liao at United Nations AI for Good
More than 170 attendees joined the online lecture by our Junior Member Ruotong Liao, PhD student in the group of our PI Volker Tresp, who spoke as an invited speaker at the United Nations "AI for Good" on Monday, 17 March 2025.
With her talk "Perceive, Remember, and Predict: Explainable Multimodal Agents with Symbolic Representations," Ruotong Liao took part in the online event "Explainable Multimodal Agents with Symbolic Representations & Can AI be less biased?"
At the event, hosted by the UN's leading platform for artificial intelligence for sustainable development, Ruotong Liao presented her research results, focusing on how the integration of temporal reasoning and symbolic knowledge about evolving events enables LLMs to make structured, interpretable, and context-sensitive predictions. Her work aims to develop explainable multimodal agents capable of perceiving, remembering, predicting, and justifying their conclusions over time.
Watch the full presentation in the stream.
Related

16.10.2025
SIC: Making AI Image Classification Understandable
SIC by the team of Christian Wachinger at ICCV 2025: Transparent AI for intuitive, reliable, and interpretable medical image classification.

14.10.2025
Industry Pitch Talks Recap
On Oct 7, MCML and Munich NLP hosted a Pitch Talks session on German dialect NLP and semantic search with 50+ participants.

09.10.2025
Rethinking AI in Public Institutions - Balancing Prediction and Capacity
Unai Fischer Abaigar explores how AI can make public decisions fairer, smarter, and more effective.

08.10.2025
MCML-LAMARR Workshop at University of Bonn
MCML and Lamarr researchers met in Bonn to exchange ideas on NLP, LLM finetuning, and AI ethics.