
Research Group Gitta Kutyniok


Gitta Kutyniok

Prof. Dr.

Mathematical Foundations of Artificial Intelligence

A2 | Mathematical Foundations

Gitta Kutyniok holds the Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence at LMU Munich.

The chair's research focuses on the intersection of mathematics and artificial intelligence, aiming both at a mathematical understanding of artificial intelligence and at artificial intelligence for mathematical problems.

Team members @MCML


Vit Fojtik

Mathematical Foundations of Artificial Intelligence

A2 | Mathematical Foundations


Maria Matveev

Mathematical Foundations of Artificial Intelligence

Junior Representative

A2 | Mathematical Foundations


Raffaele Paolino

Mathematical Foundations of Artificial Intelligence

A2 | Mathematical Foundations

Publications @MCML

[13]
R. Paolino, S. Maskey, P. Welke and G. Kutyniok.
Weisfeiler and Leman Go Loopy: A New Hierarchy for Graph Representational Learning.
38th Conference on Neural Information Processing Systems (NeurIPS 2024). Vancouver, Canada, Dec 10-15, 2024. To be published. Preprint at arXiv. GitHub.
Abstract

We introduce r-loopy Weisfeiler-Leman (r-ℓWL), a novel hierarchy of graph isomorphism tests and a corresponding GNN framework, r-ℓMPNN, that can count cycles up to length r+2. Most notably, we show that r-ℓWL can count homomorphisms of cactus graphs. This strictly extends classical 1-WL, which can only count homomorphisms of trees and, in fact, is incomparable to k-WL for any fixed k. We empirically validate the expressive and counting power of the proposed r-ℓMPNN on several synthetic datasets and present state-of-the-art predictive performance on various real-world datasets.
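For orientation, the classical 1-WL (color refinement) test that r-ℓWL strictly extends can be sketched in a few lines. This is an illustrative reimplementation under standard definitions, not the authors' code; it also shows the well-known failure case, the inability to count cycles, that the loopy hierarchy addresses:

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """Classical 1-WL color refinement on an adjacency dict {node: [neighbors]}.
    Graphs with different final color histograms are non-isomorphic;
    equal histograms are inconclusive."""
    colors = {v: 0 for v in adj}  # uniform initial coloring
    for _ in range(rounds):
        # new color = (own color, sorted multiset of neighbor colors)
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        # relabel signatures with small integers
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        colors = {v: palette[sigs[v]] for v in adj}
    return Counter(colors.values())

# 1-WL cannot tell a 6-cycle from two disjoint triangles (both 2-regular),
# i.e., it cannot count cycles; this is the gap stronger tests must close.
hexagon = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_colors(hexagon) == wl_colors(two_triangles))  # True: indistinguishable
```

Since every node in both graphs is 2-regular, refinement never splits the color classes, so the histograms coincide even though the graphs have different cycle counts.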

MCML Authors: Raffaele Paolino, Gitta Kutyniok


[12]
P. Scholl, M. Iskandar, S. Wolf, J. Lee, A. Bacho, A. Dietrich, A. Albu-Schäffer and G. Kutyniok.
Learning-based adaption of robotic friction models.
Robotics and Computer-Integrated Manufacturing 89 (Oct. 2024). DOI.
MCML Authors: Gitta Kutyniok


[11]
Y. Lee, H. Boche and G. Kutyniok.
Computability of Optimizers.
IEEE Transactions on Information Theory 70.4 (Apr. 2024). DOI.
MCML Authors: Gitta Kutyniok


[10]
H. Boche, A. Fono and G. Kutyniok.
Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement.
Preprint at arXiv (Jan. 2024). arXiv.
Abstract

Deep learning still has drawbacks in terms of trustworthiness, which comprises being comprehensible, fair, safe, and reliable. To mitigate the potential risk of AI, clear obligations associated with trustworthiness have been proposed via regulatory guidelines, e.g., in the European AI Act. Therefore, a central question is to what extent trustworthy deep learning can be realized. Establishing the properties constituting trustworthiness requires that the factors influencing an algorithmic computation can be retraced, i.e., that the algorithmic implementation is transparent. Motivated by the observation that the current evolution of deep learning models necessitates a change in computing technology, we derive a mathematical framework which enables us to analyze whether a transparent implementation in a computing model is feasible. We exemplarily apply our trustworthiness framework to analyze deep learning approaches for inverse problems in digital and analog computing models represented by Turing and Blum-Shub-Smale machines, respectively. Based on previous results, we find that Blum-Shub-Smale machines have the potential to establish trustworthy solvers for inverse problems under fairly general conditions, whereas Turing machines cannot guarantee trustworthiness to the same degree.
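The digital-versus-analog distinction driving this analysis can be glimpsed in a toy example (not from the paper): floating-point arithmetic, which is what a Turing-style digital model ultimately offers, fails on identities that an exact-real (BSS-style) model satisfies. Exact rational arithmetic stands in here for the idealized exact-real computation:

```python
from fractions import Fraction

# Digital hardware represents reals only approximately (IEEE 754 floats),
# so even a trivial identity fails at machine precision:
digital_exact = (0.1 + 0.2 == 0.3)
print(digital_exact)  # False: finite-precision arithmetic

# A Blum-Shub-Smale machine is an idealized model computing on exact reals;
# exact rational arithmetic mimics this for the same identity:
bss_like_exact = (Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))
print(bss_like_exact)  # True: exact arithmetic
```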

MCML Authors: Gitta Kutyniok


[9]
M. Singh, A. Fono and G. Kutyniok.
Expressivity of Spiking Neural Networks through the Spike Response Model.
1st Workshop on Unifying Representations in Neural Models (UniReps 2023) at the 37th Conference on Neural Information Processing Systems (NeurIPS 2023). New Orleans, LA, USA, Dec 10-16, 2023. URL.
MCML Authors: Gitta Kutyniok


[8]
S. Maskey, R. Paolino, A. Bacho and G. Kutyniok.
A Fractional Graph Laplacian Approach to Oversmoothing.
37th Conference on Neural Information Processing Systems (NeurIPS 2023). New Orleans, LA, USA, Dec 10-16, 2023. URL. GitHub.
MCML Authors: Raffaele Paolino, Gitta Kutyniok


[7]
H. Boche, A. Fono and G. Kutyniok.
Limitations of Deep Learning for Inverse Problems on Digital Hardware.
IEEE Transactions on Information Theory 69.12 (Dec. 2023). DOI.
MCML Authors: Gitta Kutyniok


[6]
P. Scholl, K. Bieker, H. Hauger and G. Kutyniok.
ParFam -- Symbolic Regression Based on Continuous Global Optimization.
Preprint at arXiv (Oct. 2023). arXiv.
MCML Authors: Gitta Kutyniok


[5]
Ç. Yapar, F. Jaensch, L. Ron, G. Kutyniok and G. Caire.
Overview of the Urban Wireless Localization Competition.
IEEE Workshop on Machine Learning for Signal Processing (MLSP 2023). Rome, Italy, Sep 17-20, 2023. DOI.
MCML Authors: Gitta Kutyniok


[4]
G. Kutyniok.
An introduction to the mathematics of deep learning.
European Congress of Mathematics (Jul. 2023). DOI.
MCML Authors: Gitta Kutyniok


[3]
A. Bacho, H. Boche and G. Kutyniok.
Reliable AI: Does the Next Generation Require Quantum Computing?
Preprint at arXiv (Jul. 2023). arXiv.
MCML Authors: Gitta Kutyniok


[2]
Ç. Yapar, F. Jaensch, R. Levie, G. Kutyniok and G. Caire.
The First Pathloss Radio Map Prediction Challenge.
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023). Rhodes Island, Greece, Jun 04-10, 2023. DOI.
MCML Authors: Gitta Kutyniok


[1]
R. Paolino, A. Bojchevski, S. Günnemann, G. Kutyniok and R. Levie.
Unveiling the Sampling Density in Non-Uniform Geometric Graphs.
11th International Conference on Learning Representations (ICLR 2023). Kigali, Rwanda, May 01-05, 2023. URL.
Abstract

A powerful framework for studying graphs is to consider them as geometric graphs: nodes are randomly sampled from an underlying metric space, and any pair of nodes is connected if their distance is less than a specified neighborhood radius. Currently, the literature mostly focuses on uniform sampling and constant neighborhood radius. However, real-world graphs are likely to be better represented by a model in which the sampling density and the neighborhood radius can both vary over the latent space. For instance, in a social network communities can be modeled as densely sampled areas, and hubs as nodes with larger neighborhood radius. In this work, we first perform a rigorous mathematical analysis of this (more general) class of models, including derivations of the resulting graph shift operators. The key insight is that graph shift operators should be corrected in order to avoid potential distortions introduced by the non-uniform sampling. Then, we develop methods to estimate the unknown sampling density in a self-supervised fashion. Finally, we present exemplary applications in which the learnt density is used to 1) correct the graph shift operator and improve performance on a variety of tasks, 2) improve pooling, and 3) extract knowledge from networks. Our experimental findings support our theory and provide strong evidence for our model.
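A minimal sketch of the baseline construction (constant neighborhood radius) makes the distortion concrete: when the sampling density varies, node degrees concentrate in densely sampled regions. All names and parameters here are illustrative, not the paper's estimator:

```python
import math
import random

def geometric_graph(points, radius):
    """Constant-radius geometric graph: connect every pair of points
    whose Euclidean distance is below `radius` (the baseline model
    that the non-uniform analysis generalizes)."""
    n = len(points)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if math.dist(points[i], points[j]) < radius]

# Non-uniform sampling: a densely sampled community plus a sparse background.
random.seed(0)
community = [(random.gauss(0.5, 0.03), random.gauss(0.5, 0.03)) for _ in range(30)]
background = [(random.random(), random.random()) for _ in range(30)]
edges = geometric_graph(community + background, radius=0.1)

# Degrees pile up in the densely sampled region; this is the distortion
# that the corrected graph shift operators are designed to undo.
deg = [0] * 60
for i, j in edges:
    deg[i] += 1
    deg[j] += 1
```

Comparing the average degree of the community points to that of the background points shows the density-induced skew that a density-corrected operator would normalize away.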

MCML Authors: Raffaele Paolino, Stephan Günnemann (Data Analytics & Machine Learning, A3 | Computational Models), Gitta Kutyniok