15.02.2022

MCML Researchers With Two Papers at AAAI 2022
36th Conference on Artificial Intelligence (AAAI 2022). Virtual, 22.02.2022–01.03.2022
We are happy to announce that MCML researchers are represented with two papers at AAAI 2022. Congrats to our researchers!
Main Track (2 papers)
TLogic: Temporal logical rules for explainable link forecasting on temporal knowledge graphs.
AAAI 2022 - 36th Conference on Artificial Intelligence. Virtual, Feb 22-Mar 01, 2022. DOI
Abstract
Conventional static knowledge graphs model entities in relational data as nodes, connected by edges of specific relation types. However, information and knowledge evolve continuously, and temporal dynamics emerge, which are expected to influence future situations. In temporal knowledge graphs, time information is integrated into the graph by equipping each edge with a timestamp or a time range. Embedding-based methods have been introduced for link prediction on temporal knowledge graphs, but they mostly lack explainability and comprehensible reasoning chains. In particular, they are usually not designed to deal with link forecasting – event prediction involving future timestamps. We address the task of link forecasting on temporal knowledge graphs and introduce TLogic, an explainable framework that is based on temporal logical rules extracted via temporal random walks. We compare TLogic with state-of-the-art baselines on three benchmark datasets and show better overall performance, while our method also provides explanations that preserve time consistency. Furthermore, in contrast to most state-of-the-art embedding-based methods, TLogic works well in the inductive setting, where already learned rules are transferred to related datasets with a common vocabulary.
MCML Authors
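
To give a flavor of the approach, the sketch below shows how a time-respecting backward walk over a temporal knowledge graph might look in Python. It is only an illustration under simplifying assumptions: the toy quadruples, relation names, and the exact sampling procedure are invented for this example and do not reproduce the authors' implementation. The point is that every step follows an edge with a strictly earlier timestamp, and the traversed relations can then be abstracted into a temporal rule pattern.

import random
from collections import defaultdict

# Toy temporal knowledge graph: (head, relation, tail, timestamp) quadruples.
quadruples = [
    ("A", "negotiate", "B", 1),
    ("D", "negotiate", "A", 1),
    ("B", "visit", "C", 2),
    ("A", "consult", "C", 3),
    ("C", "threaten", "A", 4),
]

# Index incoming edges per entity so the walk can step backwards through the graph.
edges_into = defaultdict(list)
for head, relation, tail, ts in quadruples:
    edges_into[tail].append((head, relation, tail, ts))

def temporal_random_walk(start_quad, length, rng):
    # Walk backwards from the head entity of `start_quad`, only ever following
    # edges with strictly earlier timestamps, so the walk is time-consistent.
    walk = [start_quad]
    entity, time = start_quad[0], start_quad[3]
    for _ in range(length):
        candidates = [q for q in edges_into[entity] if q[3] < time]
        if not candidates:
            return None  # dead end: no earlier edge arrives at this entity
        step = rng.choice(candidates)
        walk.append(step)
        entity, time = step[0], step[3]
    return walk

def walk_to_rule(walk):
    # Abstract the concrete walk into a rule pattern: the head relation is
    # implied by the sequence of body relations that precede it in time.
    head_relation = walk[0][1]
    body_relations = tuple(q[1] for q in walk[1:])
    return head_relation, body_relations

rng = random.Random(0)
walk = temporal_random_walk(("C", "threaten", "A", 4), length=2, rng=rng)
if walk is not None:
    print(walk_to_rule(walk))  # e.g. ('threaten', ('consult', 'negotiate'))
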
Improving Scene Graph Classification by Exploiting Knowledge from Texts.
AAAI 2022 - 36th Conference on Artificial Intelligence. Virtual, Feb 22-Mar 01, 2022. DOI
Abstract
Training scene graph classification models requires a large amount of annotated image data. Meanwhile, scene graphs represent relational knowledge that can be modeled with symbolic data from texts or knowledge graphs. While image annotation demands extensive labor, collecting textual descriptions of natural scenes requires less effort. In this work, we investigate whether textual scene descriptions can substitute for annotated image data. To this end, we employ a scene graph classification framework that is trained not only from annotated images but also from symbolic data. In our architecture, the symbolic entities are first mapped to their corresponding image-grounded representations and then fed into the relational reasoning pipeline. Even though a structured form of knowledge, such as that in knowledge graphs, is not always available, we can generate it from unstructured texts using a transformer-based language model. We show that by fine-tuning the classification pipeline with the extracted knowledge from texts, we can achieve ~8x more accurate results in scene graph classification, ~3x in object classification, and ~1.5x in predicate classification, compared to the supervised baselines with only 1% of the annotated images.
MCML Authors
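
As a rough illustration of the idea of learning relational reasoning from symbolic triples rather than annotated images, here is a small PyTorch sketch. Everything in it is an assumption made for this example (the tiny label vocabularies, the embedding dimension, the two toy triples, the classifier head); it is not the paper's architecture. It only shows the general pattern: symbolic subject and object labels are mapped to learned representations, which stand in for image-grounded features, and a classifier is fine-tuned to predict the predicate that connects them.

import torch
import torch.nn as nn

OBJECT_CLASSES = ["person", "horse", "hat"]     # toy object label vocabulary
PREDICATE_CLASSES = ["riding", "wearing"]       # toy predicate vocabulary
obj2idx = {c: i for i, c in enumerate(OBJECT_CLASSES)}
pred2idx = {p: i for i, p in enumerate(PREDICATE_CLASSES)}

class TriplePredicateClassifier(nn.Module):
    # Embed symbolic subject/object labels, then classify the predicate between them.
    def __init__(self, num_objects, num_predicates, dim=32):
        super().__init__()
        self.embed = nn.Embedding(num_objects, dim)  # stand-in for image-grounded representations
        self.classifier = nn.Sequential(
            nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, num_predicates)
        )

    def forward(self, subj_idx, obj_idx):
        pair = torch.cat([self.embed(subj_idx), self.embed(obj_idx)], dim=-1)
        return self.classifier(pair)

# Toy (subject, predicate, object) triples as they might be extracted from text.
triples = [("person", "riding", "horse"), ("person", "wearing", "hat")]
subj = torch.tensor([obj2idx[s] for s, _, _ in triples])
obj = torch.tensor([obj2idx[o] for _, _, o in triples])
target = torch.tensor([pred2idx[p] for _, p, _ in triples])

model = TriplePredicateClassifier(len(OBJECT_CLASSES), len(PREDICATE_CLASSES))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# A few gradient steps on the symbolic triples (fine-tuning with text-derived knowledge).
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(subj, obj), target)
    loss.backward()
    optimizer.step()

print(model(subj, obj).argmax(dim=-1))  # should recover the two toy predicates
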