
Research Group Stefan Bauer



Stefan Bauer

Prof. Dr.

Principal Investigator

Algorithmic Machine Learning & Explainable AI

Stefan Bauer is an Associate Professor of Algorithmic Machine Learning & Explainable AI at TU Munich and a senior PI at Helmholtz AI.

He works on developing algorithms that learn causal relationships from high-dimensional inputs, explain their decisions, and adapt quickly to new problems. These capabilities are key prerequisites for robust and transformative AI-based technologies with a wide range of downstream applications.

Team members @MCML

PostDocs


Andrea Dittadi

Dr.

Algorithmic Machine Learning & Explainable AI

PhD Students


Emmanouil Angelis

Algorithmic Machine Learning & Explainable AI

Recent News @MCML


01.01.2025

MCML Researchers With 39 Papers in Highly-Ranked Journals


05.12.2024

MCML Researchers With 28 Papers at NeurIPS 2024


18.11.2024

Several MCML PIs Receive BMBF Funding

Publications @MCML

2025


[2]
T. Willem, V. A. Shitov, M. D. Luecken, N. Kilbertus, S. Bauer, M. Piraud, A. Buyx and F. J. Theis.
Biases in machine-learning models of human single-cell data.
Nature Cell Biology (Feb. 2025). DOI
Abstract

Recent machine-learning (ML)-based advances in single-cell data science have enabled the stratification of human tissue donors at single-cell resolution, promising to provide valuable diagnostic and prognostic insights. However, such insights are susceptible to biases. Here we discuss various biases that emerge along the pipeline of ML-based single-cell analysis, ranging from societal biases affecting whose samples are collected, to clinical and cohort biases that influence the generalizability of single-cell datasets, biases stemming from single-cell sequencing, ML biases specific to (weakly supervised or unsupervised) ML models trained on human single-cell samples and biases during the interpretation of results from ML models. We end by providing methods for single-cell data scientists to assess and mitigate biases, and call for efforts to address the root causes of biases.

MCML Authors

Niki Kilbertus

Prof. Dr.

Ethics in Systems Design and Machine Learning


Stefan Bauer

Prof. Dr.

Algorithmic Machine Learning & Explainable AI


2024


[1]
B. M. G. Nielsen, L. Gresele and A. Dittadi.
Challenges in Explaining Representational Similarity through Identifiability.
UniReps @NeurIPS 2024 - 2nd Workshop on Unifying Representations in Neural Models at the 38th Conference on Neural Information Processing Systems (NeurIPS 2024). Vancouver, Canada, Dec 10-15, 2024. URL
Abstract

The phenomenon of different deep learning models producing similar data representations has garnered significant attention, raising the question of why such representational similarity occurs. Identifiability theory offers a partial explanation: for a broad class of discriminative models, including many popular in representation learning, those assigning equal likelihood to the observations yield representations that are equal up to a linear transformation, if a suitable diversity condition holds. In this work, we identify two key challenges in applying identifiability theory to explain representational similarity. First, the assumption of exact likelihood equality is rarely satisfied by practical models trained with different initializations. To address this, we describe how the representations of two models deviate from being linear transformations of each other, based on their difference in log-likelihoods. Second, we demonstrate that even models with similar and near-optimal loss values can produce highly dissimilar representations due to an underappreciated difference between loss and likelihood. Our findings highlight key open questions and point to future research directions for advancing the theoretical understanding of representational similarity.

MCML Authors

Andrea Dittadi

Dr.

Algorithmic Machine Learning & Explainable AI