15.02.2022

MCML Researchers With Two Papers at AAAI 2022
36th Conference on Artificial Intelligence (AAAI 2022). Virtual, 22.02.2022–01.03.2022
We are happy to announce that MCML researchers are represented with two papers at AAAI 2022. Congrats to our researchers!
Main Track (2 papers)
TLogic: Temporal logical rules for explainable link forecasting on temporal knowledge graphs.
AAAI 2022 - 36th Conference on Artificial Intelligence. Virtual, Feb 22-Mar 01, 2022. DOI
Abstract
Conventional static knowledge graphs model entities in relational data as nodes, connected by edges of specific relation types. However, information and knowledge evolve continuously, and temporal dynamics emerge, which are expected to influence future situations. In temporal knowledge graphs, time information is integrated into the graph by equipping each edge with a timestamp or a time range. Embedding-based methods have been introduced for link prediction on temporal knowledge graphs, but they mostly lack explainability and comprehensible reasoning chains. Particularly, they are usually not designed to deal with link forecasting – event prediction involving future timestamps. We address the task of link forecasting on temporal knowledge graphs and introduce TLogic, an explainable framework that is based on temporal logical rules extracted via temporal random walks. We compare TLogic with state-of-the-art baselines on three benchmark datasets and show better overall performance while our method also provides explanations that preserve time consistency. Furthermore, in contrast to most state-of-the-art embedding-based methods, TLogic works well in the inductive setting where already learned rules are transferred to related datasets with a common vocabulary.
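As a toy illustration of the setting the abstract describes (not the authors' implementation — TLogic learns its rules automatically via temporal random walks), a temporal knowledge graph can be stored as timestamped quadruples, and a single temporal logical rule can be applied by checking for time-consistent body edges; the matched edge doubles as the explanation for the forecast. All entity and relation names below are hypothetical:

```python
# Toy temporal knowledge graph: quadruples (subject, relation, object, timestamp).
quadruples = [
    ("A", "negotiate", "B", 1),
    ("A", "sign_agreement", "B", 2),
    ("C", "negotiate", "D", 3),
]

def apply_rule(quads, body_rel, head_rel, query_time):
    """Apply a toy temporal rule: if (X, body_rel, Y, t) holds with
    t < query_time, forecast (X, head_rel, Y, query_time).
    Returns each forecast together with its supporting body edge,
    which serves as a time-consistent explanation."""
    forecasts = []
    for s, r, o, t in quads:
        if r == body_rel and t < query_time:
            forecasts.append(((s, head_rel, o, query_time), (s, r, o, t)))
    return forecasts

# Forecast future "sign_agreement" links at time 4 from past "negotiate" links.
preds = apply_rule(quadruples, "negotiate", "sign_agreement", query_time=4)
for head, body in preds:
    print(head, "supported by", body)
```

Because the rule only references relation types, not specific entities, it transfers to any dataset sharing the same relation vocabulary — a simplified view of the inductive setting mentioned above.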
MCML Authors
Improving Scene Graph Classification by Exploiting Knowledge from Texts.
AAAI 2022 - 36th Conference on Artificial Intelligence. Virtual, Feb 22-Mar 01, 2022. DOI
Abstract
Training scene graph classification models requires a large amount of annotated image data. Meanwhile, scene graphs represent relational knowledge that can be modeled with symbolic data from texts or knowledge graphs. While image annotation demands extensive labor, collecting textual descriptions of natural scenes requires less effort. In this work, we investigate whether textual scene descriptions can substitute for annotated image data. To this end, we employ a scene graph classification framework that is trained not only from annotated images but also from symbolic data. In our architecture, the symbolic entities are first mapped to their correspondent image-grounded representations and then fed into the relational reasoning pipeline. Even though a structured form of knowledge, such as the form in knowledge graphs, is not always available, we can generate it from unstructured texts using a transformer-based language model. We show that by fine-tuning the classification pipeline with the extracted knowledge from texts, we can achieve ~8x more accurate results in scene graph classification, ~3x in object classification, and ~1.5x in predicate classification, compared to the supervised baselines with only 1% of the annotated images.
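A minimal sketch of the core idea above — that scene graphs are relational triples obtainable from more than one source — assuming hand-written example triples in place of real image annotations and in place of the transformer-based extraction step the paper uses:

```python
# Triples from (hypothetical) annotated image data.
image_triples = [("person", "riding", "horse")]

# Triples from a textual scene description. The paper extracts these with a
# transformer-based language model; here they are written by hand.
text_triples = [("person", "wearing", "hat"), ("horse", "on", "grass")]

def merge_graphs(*triple_sets):
    """Combine relational knowledge from several sources into one scene graph,
    represented as a deduplicated set of (subject, predicate, object) triples."""
    graph = set()
    for triples in triple_sets:
        graph.update(triples)
    return graph

scene_graph = merge_graphs(image_triples, text_triples)
```

The set representation makes the substitution argument concrete: textual triples enlarge the relational training signal without requiring additional image labels.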
MCML Authors
Related

10.07.2025
Beyond Prediction: How Causal AI Enables Better Decision-Making – With Stefan Feuerriegel
In our new film, Stefan Feuerriegel shows how Causal AI helps pick better actions by predicting outcomes for each possible decision.

10.07.2025
MCML Researchers With 22 Papers at ICML 2025
42nd International Conference on Machine Learning (ICML 2025). Vancouver, Canada, 13.07.2025–19.07.2025

06.07.2025
How Neural Networks Are Changing Medical Imaging – With Reinhard Heckel
In the new research film, Reinhard Heckel shows how AI enables sharper heart imaging from limited or noisy data.

25.06.2025
When Clinical Expertise Meets AI Innovation – With Michael Ingrisch
The new research film features Michael Ingrisch, who shows how AI and clinical expertise can solve real challenges in radiology together.