Research Group Stefanie Jegelka

Prof. Dr. Stefanie Jegelka

Principal Investigator

Stefanie Jegelka is a Humboldt Professor at TU Munich.

Her research is in algorithmic machine learning and spans modeling, optimization algorithms, theory, and applications. In particular, she works on exploiting mathematical structure in discrete and combinatorial machine learning problems, on robustness, and on scaling machine learning algorithms.

Team members @MCML

PostDocs

Ya-Wei Eileen Lin

PhD Students

Andreas Bergmeister

Vincent Bürgin

Valerie Engelmayer

Daniel Herbst

Eduardo Santos Escriche

Recent News @MCML

MCML at ICML 2025

MCML at CVPR 2025

MCML at ICLR 2025

MCML at NeurIPS 2024

MCML at MICCAI 2024

Publications @MCML

2025


[21]
X. Guo • R. Zhou • Y. Wang • Q. Zhang • C. Zhang • S. Jegelka • X. Wang • J. Chai • G. Yin • W. Lin • Y. Wang
SSL4RL: Revisiting Self-supervised Learning as Intrinsic Reward for Visual-Language Reasoning.
Preprint (Oct. 2025).

[20]
F. Kiwitt • B. Tahmasebi • S. Jegelka
Symmetries in Weight Space Learning: To Retain or Remove?
HiLD @ICML 2025 - Workshop on High-dimensional Learning Dynamics at the 42nd International Conference on Machine Learning. Vancouver, Canada, Jul 13-19, 2025. URL

[19] A* Conference
A. Soleymani • B. Tahmasebi • S. Jegelka • P. Jaillet
Learning with Exact Invariances in Polynomial Time.
ICML 2025 - 42nd International Conference on Machine Learning. Vancouver, Canada, Jul 13-19, 2025. URL

[18] A* Conference
T. Dagès • S. Weber • Y.-W. E. Lin • R. Talmon • D. Cremers • M. Lindenbaum • A. M. Bruckstein • R. Kimmel
Finsler Multi-Dimensional Scaling: Manifold Learning for Asymmetric Dimensionality Reduction and Embedding.
CVPR 2025 - IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, TN, USA, Jun 11-15, 2025. DOI

[17]
A. Bergmeister • M. K. Lal • S. Jegelka • S. Sra
A projection-based framework for gradient-free and parallel learning.
Preprint (Jun. 2025).

[16]
E. S. Escriche • S. Jegelka
Learning equivariant models by discovering symmetries with learnable augmentations.
Preprint (Jun. 2025).

[15]
X. Guo • A. Li • Y. Wang • S. Jegelka • Y. Wang
G1: Teaching LLMs to Reason on Graphs with Reinforcement Learning.
Preprint (May 2025). GitHub

[14] A* Conference
L. Fang • Y. Wang • Z. Liu • C. Zhang • S. Jegelka • J. Gao • B. Ding • Y. Wang
What is Wrong with Perplexity for Long-context Language Modeling?
ICLR 2025 - 13th International Conference on Learning Representations. Singapore, Apr 24-28, 2025. URL GitHub

[13] A* Conference
L. Rauchwerger • S. Jegelka • R. Levie
Generalization, Expressivity, and Universality of Graph Neural Networks on Attributed Graphs.
ICLR 2025 - 13th International Conference on Learning Representations. Singapore, Apr 24-28, 2025. URL

[12] A* Conference
B. Tahmasebi • S. Jegelka
Generalization Bounds for Canonicalization: A Comparative Study with Group Averaging.
ICLR 2025 - 13th International Conference on Learning Representations. Singapore, Apr 24-28, 2025. URL

[11] A* Conference
Q. Zhang • Y. Wang • J. Cui • X. Pan • Q. Lei • S. Jegelka • Y. Wang
Beyond Interpretability: The Gains of Feature Monosemanticity on Model Robustness.
ICLR 2025 - 13th International Conference on Learning Representations. Singapore, Apr 24-28, 2025. URL

[10] A* Conference
D. Herbst • S. Jegelka
Higher-Order Graphon Neural Networks: Approximation and Cut Distance.
ICLR 2025 - 13th International Conference on Learning Representations. Singapore, Apr 24-28, 2025. Spotlight Presentation. URL

2024


[9] A* Conference
G. Ma • Y. Wang • D. Lim • S. Jegelka • Y. Wang
A Canonicalization Perspective on Invariant and Equivariant Learning.
NeurIPS 2024 - 38th Conference on Neural Information Processing Systems. Vancouver, Canada, Dec 10-15, 2024. URL GitHub

[8] A* Conference
Y. Wang • K. Hu • S. Gupta • Z. Ye • Y. Wang • S. Jegelka
Understanding the Role of Equivariance in Self-supervised Learning.
NeurIPS 2024 - 38th Conference on Neural Information Processing Systems. Vancouver, Canada, Dec 10-15, 2024. URL GitHub

[7] A* Conference
Y. Wang • Y. Wu • Z. Wei • S. Jegelka • Y. Wang
A Theoretical Understanding of Self-Correction through In-context Alignment.
NeurIPS 2024 - 38th Conference on Neural Information Processing Systems. Vancouver, Canada, Dec 10-15, 2024. URL GitHub

[6] A* Conference
M. Yau • N. Karalias • E. Lu • J. Xu • S. Jegelka
Are Graph Neural Networks Optimal Approximation Algorithms?
NeurIPS 2024 - 38th Conference on Neural Information Processing Systems. Vancouver, Canada, Dec 10-15, 2024. URL

[5] A Conference
A. H. Berger • L. Lux • N. Stucki • V. Bürgin • S. Shit • A. Banaszaka • D. Rückert • U. Bauer • J. C. Paetzold
Topologically faithful multi-class segmentation in medical images.
MICCAI 2024 - 27th International Conference on Medical Image Computing and Computer Assisted Intervention. Marrakesh, Morocco, Oct 06-10, 2024. DOI

[4]
K. Gatmiry • Z. Li • S. J. Reddi • S. Jegelka
Simplicity Bias via Global Convergence of Sharpness Minimization.
Preprint (Oct. 2024).

[3]
K. Gatmiry • N. Saunshi • S. J. Reddi • S. Jegelka • S. Kumar
On the Role of Depth and Looping for In-Context Learning with Task Diversity.
Preprint (Oct. 2024).

[2]
T. Putterman • D. Lim • Y. Gelberg • S. Jegelka • H. Maron
Learning on LoRAs: GL-Equivariant Processing of Low-Rank Weight Spaces for Large Finetuned Models.
Preprint (Oct. 2024).

[1]
M. Yau • E. Akyürek • J. Mao • J. B. Tenenbaum • S. Jegelka • J. Andreas
Learning Linear Attention in Polynomial Time.
Preprint (Oct. 2024).