
A Projection-Based Framework for Gradient-Free and Parallel Learning

MCML Authors


Stefanie Jegelka

Prof. Dr.

Principal Investigator


Suvrit Sra

Prof. Dr.

Principal Investigator

Abstract

We present a feasibility-seeking approach to neural network training. This mathematical optimization framework is distinct from conventional gradient-based loss minimization and uses projection operators and iterative projection algorithms. We reformulate training as a large-scale feasibility problem: finding network parameters and states that satisfy local constraints derived from the network's elementary operations. Training then involves projecting onto these constraints, a local operation that can be parallelized across the network. We introduce PJAX, a JAX-based software framework that enables this paradigm. PJAX composes projection operators for elementary operations, automatically deriving the solution operators for the feasibility problems (akin to autodiff for derivatives). It inherently supports GPU/TPU acceleration, provides a familiar NumPy-like API, and is extensible. We train diverse architectures (MLPs, CNNs, RNNs) on standard benchmarks using PJAX, demonstrating its functionality and generality. Our results show that this approach is a compelling alternative to gradient-based training, with clear advantages in parallelism and the ability to handle non-differentiable operations.
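The sketch below is illustrative only and does not use the actual PJAX API. It shows the basic primitive the abstract refers to, an iterative (alternating) projection onto two convex constraint sets, written with jax.numpy; the constraint sets, function names, and parameters are hypothetical choices for the example.

```python
import jax
import jax.numpy as jnp

def project_hyperplane(x, a, b):
    # Euclidean projection onto the affine set {x : <a, x> = b}
    return x - ((jnp.dot(a, x) - b) / jnp.dot(a, a)) * a

def project_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]^n
    return jnp.clip(x, lo, hi)

def alternating_projections(x0, a, b, lo, hi, num_iters=200):
    # Repeatedly project onto each constraint set in turn; for a non-empty
    # intersection of convex sets this converges to a feasible point.
    def body(_, x):
        return project_box(project_hyperplane(x, a, b), lo, hi)
    return jax.lax.fori_loop(0, num_iters, body, x0)

a = jnp.array([1.0, 2.0, -1.0])
b = 1.5
x_feasible = alternating_projections(jnp.zeros(3), a, b, lo=-1.0, hi=1.0)
print(x_feasible, jnp.dot(a, x_feasible))  # lies in the box, approximately on the hyperplane
```

In the paper's setting, each elementary network operation contributes such a local constraint, and the framework composes the corresponding projection operators across the whole network rather than backpropagating gradients.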



Preprint

Jun. 2025

Authors

A. Bergmeister • M. K. Lal • S. Jegelka • S. Sra

Research Areas

 A2 | Mathematical Foundations

 A3 | Computational Models

BibTeX Key: BLJ+25
