
Generalization Bounds for Canonicalization: A Comparative Study With Group Averaging

MCML Authors

Stefanie Jegelka
Prof. Dr.
Principal Investigator

Abstract

Canonicalization, a popular method for constructing invariant or equivariant function classes from arbitrary sets of functions, first projects the data onto a reduced subset of the input space and then applies any learning method to the projected dataset. Although recent research has examined the expressive power and continuity of functions represented by canonicalization, its generalization capabilities remain less explored. This paper addresses this gap by theoretically analyzing the generalization benefits and sample complexity of canonicalization and comparing them with group averaging, another popular technique for creating invariant or equivariant function classes. Our findings reveal two distinct regimes in which canonicalization may outperform or underperform group averaging, with a precise quantification of this phase transition in terms of sample size, group action characteristics, and a newly introduced concept of alignment. To the best of our knowledge, this study represents the first theoretical exploration of such behavior, offering insights into the relative effectiveness of canonicalization and group averaging under varying conditions.
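
For intuition, the following is a minimal Python sketch (not taken from the paper) contrasting the two constructions for the cyclic group acting on vectors by shifts. The base function, the lexicographic choice of canonical representative, and all names here are illustrative assumptions; the paper's analysis concerns general group actions.

import numpy as np

def orbit(x):
    # All cyclic shifts of x: the orbit of x under the cyclic group C_n.
    return [np.roll(x, k) for k in range(len(x))]

def group_average(f, x):
    # Group averaging: average f over the orbit of x; the result is shift-invariant.
    return np.mean([f(y) for y in orbit(x)])

def canonicalize(x):
    # Canonicalization: map x to a fixed orbit representative
    # (here, the lexicographically smallest shift).
    return min(orbit(x), key=tuple)

def canonical_f(f, x):
    # Canonicalized predictor: evaluate f on the canonical representative only.
    return f(canonicalize(x))

# Example with an arbitrary, non-invariant base function.
f = lambda v: float(v @ np.arange(len(v)))
x = np.array([3.0, 1.0, 2.0])
for y in orbit(x):
    # Both constructions return the same value on every element of the orbit,
    # i.e. both yield shift-invariant predictors from the same base function.
    print(group_average(f, y), canonical_f(f, y))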

inproceedings

ICLR 2025

13th International Conference on Learning Representations. Singapore, Apr 24-28, 2025.
A* Conference

Authors

B. Tahmasebi • S. Jegelka

Links

URL

Research Area

A3 | Computational Models

BibTeX Key: TJ25
