
Steering MoE LLMs via Expert (De)Activation

MCML Authors


Hinrich Schütze

Prof. Dr.

Principal Investigator

Abstract

Mixture-of-Experts (MoE) in Large Language Models (LLMs) routes each token through a subset of specialized Feed-Forward Networks (FFNs), known as experts. We present SteerMoE, a framework for steering MoE models by detecting and controlling behavior-associated experts. We detect key experts by comparing how often they activate on paired inputs that demonstrate opposite behaviors (e.g., safe vs. unsafe). By selectively activating or deactivating such experts during inference, we control behaviors like faithfulness and safety without fine-tuning. Across 11 benchmarks and 6 LLMs, our steering raises safety by up to +20% and faithfulness by +27%. Conversely, unsafe steering drops safety by 41% on its own, and by 100% when combined with existing jailbreak methods, bypassing all safety guardrails. Overall, SteerMoE offers a lightweight, effective, and widely applicable test-time control, while revealing unique vulnerabilities in MoE LLMs.
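The detection-and-steering idea described in the abstract can be sketched in a few lines. This is a hypothetical, simplified illustration (the function names, shapes, and the use of raw router logits are assumptions, not the paper's actual implementation): count how often each expert lands in the router's top-k on two paired input sets, take the experts with the largest frequency gap, and mask their logits at inference time.

```python
import numpy as np

# Minimal sketch of SteerMoE-style expert detection and steering.
# All names and shapes here are illustrative assumptions.

def topk_experts(router_logits, k):
    """Indices of the top-k experts a token is routed to."""
    return np.argsort(router_logits)[-k:]

def activation_freq(router_logits_batch, n_experts, k):
    """Fraction of tokens for which each expert is activated."""
    counts = np.zeros(n_experts)
    for logits in router_logits_batch:
        counts[topk_experts(logits, k)] += 1
    return counts / len(router_logits_batch)

def detect_key_experts(freq_pos, freq_neg, top_m):
    """Experts whose activation frequency differs most between paired
    inputs showing opposite behaviors (e.g., safe vs. unsafe)."""
    diff = freq_pos - freq_neg
    return np.argsort(diff)[-top_m:]

def steer(router_logits, experts, deactivate=True):
    """Deactivate (or force) selected experts before top-k routing
    by masking their router logits."""
    logits = router_logits.copy()
    logits[list(experts)] = -np.inf if deactivate else np.inf
    return logits
```

For example, if expert 1 has the highest logit for a token but was flagged as behavior-associated, `topk_experts(steer(logits, [1]), k)` routes the token to the next-best experts instead, with no fine-tuning involved.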

inproceedings FMD+26


ICLR 2026

14th International Conference on Learning Representations. Rio de Janeiro, Brazil, Apr 23-27, 2026. To be published. Preprint available.
A* Conference

Authors

M. Fayyaz • A. Modarressi • H. Deilamsalehy • F. Dernoncourt • R. Rossi • T. Bui • H. Schütze • N. Peng

Links

arXiv • GitHub

In Collaboration

 Adobe


Research Area

 B2 | Natural Language Processing

BibTeXKey: FMD+26
