
High-Rank Structured Modulation for Parameter-Efficient Fine-Tuning

MCML Authors


Hinrich Schütze

Prof. Dr.

Principal Investigator

Abstract

As the number of model parameters grows, parameter-efficient fine-tuning (PEFT) has become the go-to choice for tailoring pre-trained large language models. Low-Rank Adaptation (LoRA) approximates full-parameter fine-tuning with a low-rank update and is widely used to reduce resource requirements. However, decreasing the rank limits representational capacity compared to full-parameter fine-tuning. We present SMoA, a high-rank Structured Modulation Adapter that uses fewer trainable parameters while maintaining a higher rank, thereby improving the model's representational capacity and its performance potential. The core idea is to freeze the original pretrained weights and selectively amplify or suppress their important features across multiple subspaces. This subspace mechanism provides an efficient way to increase the capacity and complexity of the model. We conduct both theoretical analyses and empirical studies on a variety of tasks. Experimental results show that SMoA outperforms LoRA and its variants on 10 tasks, and extensive ablation studies validate its effectiveness.
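The abstract does not spell out SMoA's exact parameterization, but its description (frozen pretrained weights, per-subspace amplification or suppression of features) can be sketched roughly as below. The module name StructuredModulationLinear, the contiguous split of the output dimension into num_subspaces blocks, and the per-subspace gain vectors are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a structured-modulation adapter, based only on the abstract's
# description (freeze W, learn per-subspace scaling that amplifies or suppresses
# its features). The subspace split and elementwise gating below are assumptions.
import torch
import torch.nn as nn


class StructuredModulationLinear(nn.Module):
    """Frozen linear layer whose output features are rescaled per subspace (hypothetical)."""

    def __init__(self, base_linear: nn.Linear, num_subspaces: int = 4):
        super().__init__()
        assert base_linear.out_features % num_subspaces == 0
        self.base = base_linear
        for p in self.base.parameters():          # freeze pretrained weights
            p.requires_grad_(False)
        block = base_linear.out_features // num_subspaces
        # One multiplicative gain per output feature, grouped by subspace;
        # initialized to 1 so the adapted layer starts identical to the base.
        self.gains = nn.Parameter(torch.ones(num_subspaces, block))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.base(x)                          # frozen projection
        scale = self.gains.reshape(-1)            # (out_features,)
        return y * scale                          # amplify / suppress features


if __name__ == "__main__":
    layer = StructuredModulationLinear(nn.Linear(64, 128), num_subspaces=8)
    out = layer(torch.randn(2, 64))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(out.shape, trainable)                   # torch.Size([2, 128]) 128
```

Under this reading, the effective weight diag(g)·W is not constrained to a low-rank offset from W, which is consistent with the abstract's "high-rank" framing, while the number of trainable parameters stays on the order of the output dimension.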

Preprint

Jan. 2026

Authors

Y. Liu • X. Li • M. Zhao • S. Zhang • Z. Wang • Q. Li • S. Feng • F. Ren • D. Wang • H. Schütze

Links

arXiv

Research Area

 B2 | Natural Language Processing

BibTeX Key: LLZ+26
