ETHER: Efficient Finetuning of Large-Scale Models With Hyperplane Reflections

MCML Authors

Zeynep Akata

Prof. Dr.

Principal Investigator

Abstract

Parameter-efficient finetuning (PEFT) has become ubiquitous to adapt foundation models to downstream task requirements while retaining their generalization ability. However, the amount of additionally introduced parameters and compute for successful adaptation and hyperparameter searches can explode quickly, especially when deployed at scale to serve numerous individual requests. To ensure effective, parameter-efficient, and hyperparameter-robust adaptation, we propose the ETHER transformation family, which performs Efficient fineTuning via HypErplane Reflections. By design, ETHER transformations require a minimal number of parameters, are less likely to deteriorate model performance, and exhibit robustness to hyperparameter and learning rate choices. In particular, we introduce ETHER and its relaxation ETHER+, which match or outperform existing PEFT methods with significantly fewer parameters (∼10-100 times lower than LoRA or OFT) across multiple image synthesis and natural language tasks without exhaustive hyperparameter tuning. Finally, we investigate the recent emphasis on Hyperspherical Energy retention for adaptation and raise questions on its practical utility.
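As a rough illustration of the abstract's core idea (this is a minimal sketch, not the authors' released implementation), a hyperplane reflection is a Householder-style transform H = I - 2uu^T/||u||^2 applied multiplicatively to a frozen pretrained weight matrix, so only a single length-d vector per adapted layer is trained. The class name EtherStyleLinear and the choice to reflect the layer input (equivalent to right-multiplying the frozen weight by H) are assumptions made here for illustration; the exact ETHER and ETHER+ formulations are given in the paper and the linked GitHub repository.

import torch
import torch.nn as nn

class EtherStyleLinear(nn.Module):
    """Illustrative sketch (not the official ETHER code): adapt a frozen
    pretrained linear layer with a Householder-style hyperplane reflection
    H = I - 2 u u^T / ||u||^2, training only the vector u."""

    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        self.base = pretrained
        for p in self.base.parameters():
            p.requires_grad = False            # keep pretrained weights frozen
        d = pretrained.in_features
        self.u = nn.Parameter(torch.randn(d))  # single trainable vector: d parameters

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u = self.u / self.u.norm()             # unit normal of the reflection hyperplane
        # Reflect the input across the hyperplane orthogonal to u:
        # x(I - 2uu^T) = x - 2(x . u)u, which is equivalent to using W H as the weight.
        x_reflected = x - 2.0 * (x @ u).unsqueeze(-1) * u
        return self.base(x_reflected)

Because only u is learned, the adapter adds d parameters per layer, and since a Householder reflection is orthogonal and norm-preserving, it cannot arbitrarily rescale the pretrained weights, which is in line with the abstract's claim that ETHER transformations are less likely to deteriorate model performance and are robust to learning-rate choices.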

inproceedings


ICML 2024

41st International Conference on Machine Learning. Vienna, Austria, Jul 21-27, 2024.
A* Conference

Authors

M. Bini • K. Roth • Z. Akata • A. Khoreva

Links

URL GitHub

Research Area

B1 | Computer Vision

BibTeX Key: BRA+24
