
GL Equivariant Metanetworks for Learning on Low Rank Weight Spaces

MCML Authors

Stefanie Jegelka

Prof. Dr.

Principal Investigator

Abstract

Low-rank adaptations (LoRAs) have revolutionized the finetuning of large foundation models, enabling efficient adaptation even with limited computational resources. The resulting proliferation of LoRAs together with the recent advances of weight-space learning present exciting opportunities for applying machine learning techniques that take these low-rank weights themselves as inputs. In this paper, we investigate the potential of Learning on LoRAs (LoL), a setup where machine learning models learn and make predictions on datasets of LoRA weights. Motivated by previous weight-space learning works, we first identify the inherent parameter symmetries of our data -- low-rank decompositions of weights -- which differ significantly from the parameter symmetries of standard neural networks. To efficiently process LoRA weights, we develop several symmetry-aware invariant or equivariant LoL models. In diverse experiments, we show that our LoL architectures can process LoRA weights to predict CLIP scores, finetuning data attributes, finetuning data membership, and accuracy on downstream tasks. We also show that LoL models trained on LoRAs of one pretrained model can effectively generalize to LoRAs trained on other models from the same model family. As an example of the utility of LoL, our LoL models can accurately estimate CLIP scores of diffusion models and ARC-C test accuracy of LLMs over 50,000 times faster than standard evaluation. As part of this work, we finetuned and will release datasets of more than ten thousand text-to-image diffusion-model and language-model LoRAs.
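The abstract's central observation, that low-rank weight decompositions carry parameter symmetries different from those of standard networks, can be sketched concretely: a LoRA update ΔW = BA is unchanged when (B, A) is mapped to (BM⁻¹, MA) for any invertible M ∈ GL(r). The NumPy snippet below is a minimal illustration of this symmetry (all variable names and dimensions are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 6, 8, 2  # toy dimensions; rank r is much smaller than d

# A toy LoRA update: delta_W = B @ A, with B (d_out x r) and A (r x d_in).
B = rng.standard_normal((d_out, r))
A = rng.standard_normal((r, d_in))

# GL(r) symmetry: for any invertible M, (B @ inv(M), M @ A) encodes the same update.
M = rng.standard_normal((r, r)) + 3 * np.eye(r)  # shifted to be well-conditioned
B2 = B @ np.linalg.inv(M)
A2 = M @ A

assert np.allclose(B @ A, B2 @ A2)  # the product delta_W is invariant

# Raw entries of B and A change under the symmetry, but invariant features of the
# product (e.g. its singular values) do not -- the kind of quantity a
# symmetry-aware LoL model can safely rely on.
sv1 = np.linalg.svd(B @ A, compute_uv=False)
sv2 = np.linalg.svd(B2 @ A2, compute_uv=False)
assert np.allclose(sv1, sv2)
```

This is why architectures that consume (B, A) directly must be invariant or equivariant to GL(r): two weight pairs differing only by such an M represent the identical finetuned model.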

inproceedings PLG+25

LOG 2025

4th Learning on Graphs Conference. Phoenix, AZ, USA, Dec 10-12, 2025.

Authors

T. Putterman • D. Lim • Y. Gelberg • M. M. Bronstein • S. Jegelka • H. Maron

Research Area

A3 | Computational Models

BibTeX Key: PLG+25
