21.03.2025
Explainable Multimodal Agents With Symbolic Representations & Can AI Be Less Biased?
Ruotong Liao at United Nations AI for Good
More than 170 attendees joined the online lecture by our Junior Member Ruotong Liao, PhD student in the group of our PI Volker Tresp, who spoke as an invited speaker at the United Nations "AI for Good" on Monday, 17 March 2025.
With her talk "Perceive, Remember, and Predict: Explainable Multimodal Agents with Symbolic Representations," Ruotong Liao took part in the online event "Explainable Multimodal Agents with Symbolic Representations & Can AI be less biased?"
At the event, hosted by the leading platform for artificial intelligence for sustainable development, Ruotong Liao presented her research on how integrating temporal reasoning and symbolic knowledge about evolving events enables LLMs to make structured, interpretable, and context-sensitive predictions. Her work aims at developing explainable multimodal agents capable of perceiving, remembering, predicting, and justifying their conclusions over time.
See the whole presentation in the stream.