
Research Group Steffen Schneider



Dr. Steffen Schneider
MCML Associate, Dynamical Inference

Steffen Schneider leads the Dynamical Inference Lab at Helmholtz Munich.

He works on machine learning algorithms for representation learning and the inference of nonlinear system dynamics. His team applies these algorithms to model complex biological systems in neuroscience, cell biology, and other life science applications.

Team members @MCML

PhD Students

Rodrigo Gonzalez Laiz (Dynamical Inference)
Tobias Schmidt (Dynamical Inference)

Recent News @MCML

01.02.2025: MCML Associate Steffen Schneider Is Young Scientist of the Year

Publications @MCML

2025


[2]
R. G. Laiz, T. Schmidt and S. Schneider.
Self-supervised contrastive learning performs non-linear system identification.
ICLR 2025 - 13th International Conference on Learning Representations. Singapore, Apr 24-28, 2025. To be published. Preprint available.
Abstract

Self-supervised learning (SSL) approaches have brought tremendous success across many tasks and domains. It has been argued that these successes can be attributed to a link between SSL and identifiable representation learning: Temporal structure and auxiliary variables ensure that latent representations are related to the true underlying generative factors of the data. Here, we deepen this connection and show that SSL can perform system identification in latent space. We propose DynCL, a framework to uncover linear, switching linear, and non-linear dynamics under a non-linear observation model; we give theoretical guarantees and validate them empirically.
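
The abstract describes the method only at a high level. As an illustration of the general idea, below is a minimal sketch, assuming a PyTorch setup: an encoder maps observations to latents, a learned dynamics model predicts the next latent, and an InfoNCE-style contrastive loss treats the true next step as the positive pair. The names `Encoder`, `dynamics`, and `dyncl_style_loss` are hypothetical; this is not the authors' released implementation of DynCL.

```python
# Minimal sketch, assuming PyTorch; not the authors' implementation of DynCL.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Hypothetical non-linear encoder inverting the observation model."""
    def __init__(self, obs_dim, latent_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim)
        )

    def forward(self, x):
        return self.net(x)

def dyncl_style_loss(encoder, dynamics, x_t, x_next, temperature=1.0):
    """InfoNCE-style objective: the dynamics-predicted next latent should
    match the encoding of the true next observation (positive pair);
    other samples in the batch serve as negatives."""
    z_pred = dynamics(encoder(x_t))           # (B, d) predicted next latents
    z_next = encoder(x_next)                  # (B, d) encoded true next step
    logits = z_pred @ z_next.T / temperature  # (B, B) pairwise similarities
    labels = torch.arange(x_t.size(0))        # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage; the dynamics model here is a small MLP acting in latent space.
obs_dim, latent_dim, batch = 50, 8, 64
encoder = Encoder(obs_dim, latent_dim)
dynamics = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                         nn.Linear(32, latent_dim))
x_t, x_next = torch.randn(batch, obs_dim), torch.randn(batch, obs_dim)
dyncl_style_loss(encoder, dynamics, x_t, x_next).backward()
```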

MCML Authors: Tobias Schmidt (Dynamical Inference), Steffen Schneider (Dynamical Inference)


[1]
H. Lim, J. Choi, J. Choo and S. Schneider.
Sparse autoencoders reveal selective remapping of visual concepts during adaptation.
ICLR 2025 - 13th International Conference on Learning Representations. Singapore, Apr 24-28, 2025. To be published. Preprint available.
Abstract

Adapting foundation models for specific purposes has become a standard approach to building machine learning systems for downstream applications. Yet it is an open question which mechanisms take place during adaptation. Here we develop a new Sparse Autoencoder (SAE) for the CLIP vision transformer, named PatchSAE, to extract interpretable concepts at granular levels (e.g., shape, color, or semantics of an object) and their patch-wise spatial attributions. We explore how these concepts influence the model output in downstream image classification tasks and investigate how recent state-of-the-art prompt-based adaptation techniques change the association of model inputs to these concepts. While activations of concepts change only slightly between adapted and non-adapted models, we find that the majority of gains on common adaptation tasks can be explained by concepts already present in the non-adapted foundation model. This work provides a concrete framework to train and use SAEs for Vision Transformers and offers insights into the mechanisms of adaptation.
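
The abstract describes PatchSAE only at a high level. As a minimal sketch of the underlying technique, here is a generic sparse autoencoder with an L1 sparsity penalty applied to patch-token activations; the class name `SparseAutoencoder`, the dictionary size, and the toy activation shapes are illustrative assumptions, not the paper's released code.

```python
# Minimal sketch, assuming PyTorch; not the released PatchSAE code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """Generic SAE: an overcomplete dictionary of candidate concepts with
    non-negative, sparsity-regularized activations."""
    def __init__(self, act_dim, dict_size):
        super().__init__()
        self.encoder = nn.Linear(act_dim, dict_size)
        self.decoder = nn.Linear(dict_size, act_dim)

    def forward(self, acts):
        codes = F.relu(self.encoder(acts))  # sparse concept activations
        recon = self.decoder(codes)         # reconstructed activations
        return recon, codes

def sae_loss(recon, acts, codes, l1_coef=1e-3):
    # Reconstruction error plus an L1 penalty that encourages sparse codes.
    return F.mse_loss(recon, acts) + l1_coef * codes.abs().mean()

# Toy usage: `acts` stands in for patch-token activations from a vision
# transformer, shape (batch, num_patches, act_dim); each code dimension is
# a candidate concept with a patch-wise spatial attribution.
acts = torch.randn(8, 196, 768)
sae = SparseAutoencoder(act_dim=768, dict_size=4096)
recon, codes = sae(acts)
sae_loss(recon, acts, codes).backward()
```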

MCML Authors: Steffen Schneider (Dynamical Inference)