15.02.2022

MCML Researchers With Two Papers at AAAI 2022
36th Conference on Artificial Intelligence (AAAI 2022). Virtual, 22.02.2022–01.03.2022
We are happy to announce that MCML researchers are represented with two papers at AAAI 2022. Congrats to our researchers!
Main Track (2 papers)
TLogic: Temporal logical rules for explainable link forecasting on temporal knowledge graphs.
AAAI 2022 - 36th Conference on Artificial Intelligence. Virtual, Feb 22-Mar 01, 2022. DOI
Abstract
Conventional static knowledge graphs model entities in relational data as nodes, connected by edges of specific relation types. However, information and knowledge evolve continuously, and temporal dynamics emerge, which are expected to influence future situations. In temporal knowledge graphs, time information is integrated into the graph by equipping each edge with a timestamp or a time range. Embedding-based methods have been introduced for link prediction on temporal knowledge graphs, but they mostly lack explainability and comprehensible reasoning chains. Particularly, they are usually not designed to deal with link forecasting – event prediction involving future timestamps. We address the task of link forecasting on temporal knowledge graphs and introduce TLogic, an explainable framework that is based on temporal logical rules extracted via temporal random walks. We compare TLogic with state-of-the-art baselines on three benchmark datasets and show better overall performance while our method also provides explanations that preserve time consistency. Furthermore, in contrast to most state-of-the-art embedding-based methods, TLogic works well in the inductive setting where already learned rules are transferred to related datasets with a common vocabulary.
MCML Authors
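To make the rule-extraction idea in the abstract above more concrete, here is a minimal Python sketch that stores a temporal knowledge graph as quadruples and samples a walk whose timestamps strictly decrease. The toy facts, the temporal_random_walk helper, and the uniform sampling are illustrative assumptions only, not the TLogic implementation, which goes on to generalise such walks into variable-based temporal rules and to score and apply them for forecasting.

import random
from collections import defaultdict

# Toy temporal knowledge graph stored as quadruples
# (subject, relation, object, timestamp); all facts are made up.
quadruples = [
    ("Alice", "worksFor", "AcmeCorp", 1),
    ("AcmeCorp", "locatedIn", "Berlin", 2),
    ("Alice", "visits", "Berlin", 3),
    ("Berlin", "hosts", "KGConf", 1),
    ("Bob", "worksFor", "AcmeCorp", 2),
]

# Index outgoing edges by head entity for cheap neighbour lookup.
out_edges = defaultdict(list)
for subj, rel, obj, ts in quadruples:
    out_edges[subj].append((rel, obj, ts))

def temporal_random_walk(start, max_len, rng=random):
    """Sample a walk whose timestamps strictly decrease, so the visited edges
    form a temporally consistent path that could be lifted into a rule body."""
    path, current, current_ts = [], start, float("inf")
    for _ in range(max_len):
        candidates = [(r, o, t) for r, o, t in out_edges[current] if t < current_ts]
        if not candidates:
            break
        rel, obj, ts = rng.choice(candidates)
        path.append((current, rel, obj, ts))
        current, current_ts = obj, ts
    return path

# Example: a short temporally consistent path starting from "Alice", e.g.
# [("Alice", "visits", "Berlin", 3), ("Berlin", "hosts", "KGConf", 1)].
print(temporal_random_walk("Alice", max_len=3))

Replacing the concrete entities and timestamps in such paths with variables would yield candidate temporal rules whose bodies respect time order; estimating their quality and applying them to forecast future links is where the actual TLogic framework goes well beyond this toy walk.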
Improving Scene Graph Classification by Exploiting Knowledge from Texts.
AAAI 2022 - 36th Conference on Artificial Intelligence. Virtual, Feb 22-Mar 01, 2022. DOI
Abstract
Training scene graph classification models requires a large amount of annotated image data. Meanwhile, scene graphs represent relational knowledge that can be modeled with symbolic data from texts or knowledge graphs. While image annotation demands extensive labor, collecting textual descriptions of natural scenes requires less effort. In this work, we investigate whether textual scene descriptions can substitute for annotated image data. To this end, we employ a scene graph classification framework that is trained not only from annotated images but also from symbolic data. In our architecture, the symbolic entities are first mapped to their corresponding image-grounded representations and then fed into the relational reasoning pipeline. Even though a structured form of knowledge, such as the form in knowledge graphs, is not always available, we can generate it from unstructured texts using a transformer-based language model. We show that by fine-tuning the classification pipeline with the extracted knowledge from texts, we can achieve ~8x more accurate results in scene graph classification, ~3x in object classification, and ~1.5x in predicate classification, compared to the supervised baselines with only 1% of the annotated images.
MCML Authors
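As a rough illustration of how symbolic triples extracted from text could supervise a relational reasoning pipeline, the PyTorch sketch below maps entity labels to learned embeddings, which stand in for the image-grounded representations mentioned in the abstract, and trains a small predicate classifier on textual triples. The class lists, the SymbolicRelationClassifier module, and the hard-coded triples are assumptions made for illustration and do not reproduce the paper's architecture.

import torch
import torch.nn as nn

# Hypothetical label vocabularies; in practice they come from the dataset.
OBJECT_CLASSES = ["person", "horse", "helmet"]
PREDICATE_CLASSES = ["riding", "wearing", "next to"]

class SymbolicRelationClassifier(nn.Module):
    """Embeds symbolic entity labels and predicts the predicate between a
    subject/object pair. The embedding table stands in for the image-grounded
    representations described in the abstract; this is not the paper's model."""

    def __init__(self, num_objects, num_predicates, dim=64):
        super().__init__()
        self.entity_embed = nn.Embedding(num_objects, dim)
        self.predicate_head = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, num_predicates)
        )

    def forward(self, subj_ids, obj_ids):
        pair = torch.cat([self.entity_embed(subj_ids), self.entity_embed(obj_ids)], dim=-1)
        return self.predicate_head(pair)

# Triples like these could be extracted from captions with a transformer-based
# language model; here they are hard-coded purely for illustration.
triples = [("person", "riding", "horse"), ("person", "wearing", "helmet")]
subj = torch.tensor([OBJECT_CLASSES.index(s) for s, _, _ in triples])
obj = torch.tensor([OBJECT_CLASSES.index(o) for _, _, o in triples])
target = torch.tensor([PREDICATE_CLASSES.index(p) for _, p, _ in triples])

model = SymbolicRelationClassifier(len(OBJECT_CLASSES), len(PREDICATE_CLASSES))
logits = model(subj, obj)
loss = nn.functional.cross_entropy(logits, target)
loss.backward()  # textual triples supply a training signal without extra image labels

The point of the sketch is only the training signal: relational supervision can come from text-derived triples rather than from additional annotated images, which is the substitution the paper investigates.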