Research Group Stephan Günnemann

Stephan Günnemann

Prof. Dr.

Data Analytics & Machine Learning

A3 | Computational Models

Stephan Günnemann is Professor of Data Analytics and Machine Learning at TU Munich.

He conducts research in machine learning and data analytics. His main research focuses on making machine learning techniques reliable, thus enabling their safe and robust use across application domains. He is particularly interested in machine learning methods for complex data domains such as graphs/networks and temporal data.

Team members @MCML

Lukas Gosch

Data Analytics & Machine Learning

A3 | Computational Models

Marcel Kollovieh

Data Analytics & Machine Learning

A3 | Computational Models

Publications @MCML

[16]
J. G. Wiese, L. Wimmer, T. Papamarkou, B. Bischl, S. Günnemann and D. Rügamer.
Towards Efficient Posterior Sampling in Deep Neural Networks via Symmetry Removal (Extended Abstract).
33rd International Joint Conference on Artificial Intelligence (IJCAI 2024). Jeju, Korea, Aug 03-09, 2024. DOI.
Abstract

Bayesian inference in deep neural networks is challenging due to the high-dimensional, strongly multi-modal parameter posterior density landscape. Markov chain Monte Carlo approaches asymptotically recover the true posterior but are considered prohibitively expensive for large modern architectures. Local methods, which have emerged as a popular alternative, focus on specific parameter regions that can be approximated by functions with tractable integrals. While these often yield satisfactory empirical results, they fail, by definition, to account for the multi-modality of the parameter posterior. In this work, we argue that the dilemma between exact-but-unaffordable and cheap-but-inexact approaches can be mitigated by exploiting symmetries in the posterior landscape. Such symmetries, induced by neuron interchangeability and certain activation functions, manifest in different parameter values leading to the same functional output value. We show theoretically that the posterior predictive density in Bayesian neural networks can be restricted to a symmetry-free parameter reference set. By further deriving an upper bound on the number of Monte Carlo chains required to capture the functional diversity, we propose a straightforward approach for feasible Bayesian inference. Our experiments suggest that efficient sampling is indeed possible, opening up a promising path to accurate uncertainty quantification in deep learning.
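The permutation symmetry described in the abstract can be illustrated with a small NumPy sketch (illustrative only, not code from the paper; the bias-sorting canonicalization is one hypothetical way to pick a symmetry-free reference representative):

```python
import numpy as np

# Toy 1-hidden-layer MLP: f(x) = W2 @ tanh(W1 @ x + b1) + b2.
# Permuting hidden neurons (rows of W1/b1, columns of W2) leaves f unchanged,
# so many distinct parameter values define the same function.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def f(x, W1, b1, W2, b2):
    return W2 @ np.tanh(W1 @ x + b1) + b2

perm = rng.permutation(4)
x = rng.normal(size=3)
y_orig = f(x, W1, b1, W2, b2)
y_perm = f(x, W1[perm], b1[perm], W2[:, perm], b2)
assert np.allclose(y_orig, y_perm)  # same function, different parameters

def canonicalize(W1, b1, W2):
    """Map parameters to a reference set by sorting hidden neurons by bias
    (a hypothetical tie-free ordering; ties have probability zero here)."""
    order = np.argsort(b1)
    return W1[order], b1[order], W2[:, order]

# Both symmetric copies canonicalize to the same representative, which is the
# sense in which the posterior can be restricted to a symmetry-free set.
c1 = canonicalize(W1, b1, W2)
c2 = canonicalize(W1[perm], b1[perm], W2[:, perm])
assert all(np.allclose(a, b) for a, b in zip(c1, c2))
```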

MCML Authors
Lisa Wimmer

Statistical Learning & Data Science

A1 | Statistical Foundations & Explainability

Bernd Bischl

Prof. Dr.

Statistical Learning & Data Science

A1 | Statistical Foundations & Explainability

Stephan Günnemann

Prof. Dr.

Data Analytics & Machine Learning

A3 | Computational Models

David Rügamer

Prof. Dr.

Data Science Group

A1 | Statistical Foundations & Explainability


[15]
J. G. Wiese, L. Wimmer, T. Papamarkou, B. Bischl, S. Günnemann and D. Rügamer.
Towards Efficient MCMC Sampling in Bayesian Neural Networks by Exploiting Symmetry.
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2023). Turin, Italy, Sep 18-22, 2023. Best paper award. DOI.
MCML Authors
Lisa Wimmer

Statistical Learning & Data Science

A1 | Statistical Foundations & Explainability

Bernd Bischl

Prof. Dr.

Statistical Learning & Data Science

A1 | Statistical Foundations & Explainability

Stephan Günnemann

Prof. Dr.

Data Analytics & Machine Learning

A3 | Computational Models

David Rügamer

Prof. Dr.

Data Science Group

A1 | Statistical Foundations & Explainability


[14]
R. Paolino, A. Bojchevski, S. Günnemann, G. Kutyniok and R. Levie.
Unveiling the Sampling Density in Non-Uniform Geometric Graphs.
11th International Conference on Learning Representations (ICLR 2023). Kigali, Rwanda, May 01-05, 2023. URL.
Abstract

A powerful framework for studying graphs is to consider them as geometric graphs: nodes are randomly sampled from an underlying metric space, and any pair of nodes is connected if their distance is less than a specified neighborhood radius. Currently, the literature mostly focuses on uniform sampling and constant neighborhood radius. However, real-world graphs are likely to be better represented by a model in which the sampling density and the neighborhood radius can both vary over the latent space. For instance, in a social network communities can be modeled as densely sampled areas, and hubs as nodes with larger neighborhood radius. In this work, we first perform a rigorous mathematical analysis of this (more general) class of models, including derivations of the resulting graph shift operators. The key insight is that graph shift operators should be corrected in order to avoid potential distortions introduced by the non-uniform sampling. Then, we develop methods to estimate the unknown sampling density in a self-supervised fashion. Finally, we present exemplary applications in which the learnt density is used to 1) correct the graph shift operator and improve performance on a variety of tasks, 2) improve pooling, and 3) extract knowledge from networks. Our experimental findings support our theory and provide strong evidence for our model.
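The model class in the abstract can be sketched in NumPy on a 1-D latent space (an illustrative toy, not the paper's construction; the density mixture, radius profile, and degree-based density estimator are all hypothetical choices):

```python
import numpy as np

# Non-uniform random geometric graph on [0, 1]: both the sampling density
# and the neighborhood radius vary over the latent space.
rng = np.random.default_rng(1)
n = 200
# Density: a densely sampled "community" near 0.3 plus uniform background.
z = np.where(rng.random(n) < 0.5, rng.normal(0.3, 0.05, n), rng.random(n))
z = np.clip(z, 0.0, 1.0)
# Radius varies with position: "hubs" near 0.8 get a larger radius.
radius = 0.05 + 0.1 * np.exp(-((z - 0.8) ** 2) / 0.01)

# Connect i and j if |z_i - z_j| < min(r_i, r_j) (one possible convention,
# chosen so the adjacency matrix stays symmetric).
dist = np.abs(z[:, None] - z[None, :])
A = (dist < np.minimum(radius[:, None], radius[None, :])).astype(float)
np.fill_diagonal(A, 0.0)

# Crude self-supervised-style density estimate: degree / (2 * radius * n)
# approximates the local sampling density around each node.
deg = A.sum(axis=1)
density_est = deg / (2 * radius * n)

# Density-corrected random-walk shift operator (a sketch of the idea, not
# the paper's exact correction): downweight edges into oversampled regions.
w = 1.0 / np.maximum(density_est, 1e-9)
S = A * w[None, :]
S = S / np.maximum(S.sum(axis=1, keepdims=True), 1e-9)
```

Without the inverse-density reweighting, densely sampled regions would dominate the row-normalized operator, which is the distortion the correction is meant to avoid.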

MCML Authors
Raffaele Paolino

Mathematical Foundations of Artificial Intelligence

A2 | Mathematical Foundations

Stephan Günnemann

Prof. Dr.

Data Analytics & Machine Learning

A3 | Computational Models

Gitta Kutyniok

Prof. Dr.

Mathematical Foundations of Artificial Intelligence

A2 | Mathematical Foundations


[13]
L. Hetzel, S. Boehm, N. Kilbertus, S. Günnemann, M. Lotfollahi and F. J. Theis.
Predicting Cellular Responses to Novel Drug Perturbations at a Single-Cell Resolution.
36th Conference on Neural Information Processing Systems (NeurIPS 2022). New Orleans, LA, USA, Nov 28-Dec 09, 2022. PDF.
MCML Authors
Leon Hetzel

Mathematical Modelling of Biological Systems

C2 | Biology

Niki Kilbertus

Prof. Dr.

Ethics in Systems Design and Machine Learning

A3 | Computational Models

Stephan Günnemann

Prof. Dr.

Data Analytics & Machine Learning

A3 | Computational Models

Fabian Theis

Prof. Dr.

Mathematical Modelling of Biological Systems

C2 | Biology


[12]
Y. Scholten, J. Schuchardt, S. Geisler, A. Bojchevski and S. Günnemann.
Randomized Message-Interception Smoothing: Gray-box Certificates for Graph Neural Networks.
36th Conference on Neural Information Processing Systems (NeurIPS 2022). New Orleans, LA, USA, Nov 28-Dec 09, 2022. PDF.
MCML Authors
Stephan Günnemann

Prof. Dr.

Data Analytics & Machine Learning

A3 | Computational Models


[11]
M. Biloš and S. Günnemann.
Scalable Normalizing Flows for Permutation Invariant Densities.
38th International Conference on Machine Learning (ICML 2021). Virtual, Jul 18-24, 2021. URL.
MCML Authors
Stephan Günnemann

Prof. Dr.

Data Analytics & Machine Learning

A3 | Computational Models


[10]
J. Schuchardt, A. Bojchevski, J. Klicpera and S. Günnemann.
Collective Robustness Certificates - Exploiting Interdependence in Graph Neural Networks.
9th International Conference on Learning Representations (ICLR 2021). Virtual, May 03-07, 2021. URL.
MCML Authors
Stephan Günnemann

Prof. Dr.

Data Analytics & Machine Learning

A3 | Computational Models


[9]
S. Geisler, D. Zügner and S. Günnemann.
Reliable Graph Neural Networks via Robust Aggregation.
34th Conference on Neural Information Processing Systems (NeurIPS 2020). Virtual, Dec 06-12, 2020. PDF.
MCML Authors
Daniel Zügner

Dr.

* Former member

A3 | Computational Models

Stephan Günnemann

Prof. Dr.

Data Analytics & Machine Learning

A3 | Computational Models


[8]
O. Shchur, N. Gao, M. Biloš and S. Günnemann.
Fast and Flexible Temporal Point Processes with Triangular Maps.
34th Conference on Neural Information Processing Systems (NeurIPS 2020). Virtual, Dec 06-12, 2020. PDF.
MCML Authors
Oleksandr Shchur

Dr.

* Former member

A3 | Computational Models

Stephan Günnemann

Prof. Dr.

Data Analytics & Machine Learning

A3 | Computational Models


[7]
D. Zügner and S. Günnemann.
Certifiable Robustness of Graph Convolutional Networks under Structure Perturbation.
26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2020). Virtual, Aug 23-27, 2020. DOI.
MCML Authors
Daniel Zügner

Dr.

* Former member

A3 | Computational Models

Stephan Günnemann

Prof. Dr.

Data Analytics & Machine Learning

A3 | Computational Models


[6]
J. Klicpera, J. Groß and S. Günnemann.
Directional Message Passing for Molecular Graphs.
8th International Conference on Learning Representations (ICLR 2020). Virtual, Apr 26-May 01, 2020. URL.
MCML Authors
Stephan Günnemann

Prof. Dr.

Data Analytics & Machine Learning

A3 | Computational Models


[5]
O. Shchur, M. Biloš and S. Günnemann.
Intensity-Free Learning of Temporal Point Processes (selected for spotlight presentation).
8th International Conference on Learning Representations (ICLR 2020). Virtual, Apr 26-May 01, 2020. URL.
MCML Authors
Oleksandr Shchur

Dr.

* Former member

A3 | Computational Models

Stephan Günnemann

Prof. Dr.

Data Analytics & Machine Learning

A3 | Computational Models


[4]
M. Biloš, B. Charpentier and S. Günnemann.
Uncertainty on Asynchronous Time Event Prediction (Poster).
33rd Conference on Neural Information Processing Systems (NeurIPS 2019). Vancouver, Canada, Dec 08-14, 2019. PDF.
MCML Authors
Stephan Günnemann

Prof. Dr.

Data Analytics & Machine Learning

A3 | Computational Models


[3]
A. Bojchevski and S. Günnemann.
Certifiable Robustness to Graph Perturbations.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019). Vancouver, Canada, Dec 08-14, 2019. PDF.
MCML Authors
Stephan Günnemann

Prof. Dr.

Data Analytics & Machine Learning

A3 | Computational Models


[2]
J. Klicpera, S. Weißenberger and S. Günnemann.
Diffusion Improves Graph Learning.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019). Vancouver, Canada, Dec 08-14, 2019. PDF.
MCML Authors
Stephan Günnemann

Prof. Dr.

Data Analytics & Machine Learning

A3 | Computational Models


[1]
A. Bojchevski and S. Günnemann.
Adversarial Attacks on Node Embeddings via Graph Poisoning.
36th International Conference on Machine Learning (ICML 2019). Long Beach, CA, USA, Jun 09-15, 2019. URL.
MCML Authors
Stephan Günnemann

Prof. Dr.

Data Analytics & Machine Learning

A3 | Computational Models