
Learning Equivariant Models by Discovering Symmetries With Learnable Augmentations

MCML Authors


Stefanie Jegelka

Prof. Dr.

Principal Investigator

Abstract

Recently, a trend has emerged that favors learning relevant symmetries from data in geometric domains instead of designing constrained architectures. To do so, two popular options are (1) to modify the training protocol, e.g., with a specific loss and data augmentations (soft equivariance), and (2) to ignore equivariance and infer it only implicitly. However, both options have limitations: soft equivariance requires a priori knowledge about relevant symmetries, while inferring symmetries merely via the task and larger amounts of data lacks interpretability. To address both limitations, we propose SEMoLA, an end-to-end approach that jointly (1) discovers a priori unknown symmetries in the data via learnable data augmentations, and (2) softly encodes the respective approximate equivariance into an arbitrary unconstrained model. Hence, it does not need prior knowledge about symmetries, it offers interpretability, and it maintains robustness to distribution shifts. Empirically, we demonstrate the ability of SEMoLA to robustly discover relevant symmetries while achieving high prediction accuracy across various datasets, encompassing multiple data modalities and underlying symmetry groups.
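
To make the two ingredients named in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' released implementation): a learnable augmentation module parameterized by a Lie-algebra generator, and a soft-equivariance consistency term added to the task loss. The names (LearnableAugmentation, soft_equivariance_loss, lambda_equiv), the use of PyTorch, and the assumption of an invariant prediction task are illustrative assumptions only.

```python
# Hypothetical sketch of symmetry discovery via learnable augmentations plus
# soft equivariance; not the authors' code, names and shapes are assumptions.
import torch
import torch.nn as nn


class LearnableAugmentation(nn.Module):
    """Samples transformations from a learnable Lie-algebra generator.

    The generator and the scale of the sampled coefficients are trained jointly
    with the task model, so the augmentation distribution can adapt toward the
    symmetries actually present in the data (e.g., an so(2) rotation generator).
    """

    def __init__(self, dim: int = 2):
        super().__init__()
        self.generator = nn.Parameter(torch.randn(dim, dim) * 0.1)  # learnable Lie-algebra element
        self.log_scale = nn.Parameter(torch.zeros(1))               # learnable augmentation magnitude

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_points, dim) coordinates or features.
        # Sample one coefficient per batch element and exponentiate the generator.
        t = torch.randn(x.shape[0], 1, 1, device=x.device) * self.log_scale.exp()
        g = torch.matrix_exp(t * self.generator)      # (batch, dim, dim) group elements
        return x @ g.transpose(-1, -2)                # apply the sampled transformation


def soft_equivariance_loss(model, aug, x, y, task_loss, lambda_equiv=1.0):
    """Task loss plus a consistency term that softly encodes the discovered symmetry.

    For an invariant task, predictions on augmented inputs should match
    predictions on the originals; the weight lambda_equiv trades off the two terms.
    """
    pred = model(x)
    pred_aug = model(aug(x))
    consistency = (pred - pred_aug).pow(2).mean()
    return task_loss(pred, y) + lambda_equiv * consistency
```

In a full training loop, the parameters of the task model and of the augmentation module would be optimized jointly, so the learned generator and scale indicate which transformations the data supports, which is one way to read the interpretability claim in the abstract.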

misc


Preprint

Jun. 2025

Authors

E. S. Escriche • S. Jegelka

Research Area

 A3 | Computational Models

BibTeX Key: SJ25
