
Leveraging Analytic Gradients in Provably Safe Reinforcement Learning

MCML Authors


Matthias Althoff

Prof. Dr.

Principal Investigator

Abstract

The deployment of autonomous robots in safety-critical applications requires safety guarantees. Provably safe reinforcement learning is an active field of research that aims to provide such guarantees using safeguards. These safeguards should be integrated during training to reduce the sim-to-real gap. While there are several approaches for safeguarding sampling-based reinforcement learning, analytic gradient-based reinforcement learning often achieves superior performance from fewer environment interactions. However, no safeguarding approach exists for this learning paradigm yet. Our work addresses this gap by developing the first effective safeguard for analytic gradient-based reinforcement learning. We analyse existing differentiable safeguards, adapt them through modified mappings and gradient formulations, and integrate them into a state-of-the-art learning algorithm and a differentiable simulation. Using numerical experiments on three control tasks, we evaluate how different safeguards affect learning. The results demonstrate that training can be safeguarded without compromising performance.
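
To make the idea of a differentiable safeguard concrete, here is a minimal sketch, not the paper's method: raw policy outputs are mapped onto a per-dimension safe action box by a smooth squashing, so analytic gradients from a differentiable simulator can propagate through the safeguard into the policy parameters. The class name SafeguardLayer, the box bounds, and the stand-in loss are illustrative assumptions, not taken from the publication.

```python
# Sketch of a differentiable safeguard for analytic gradient-based RL.
# Assumption: safety is given by a per-dimension action box [a_min, a_max];
# a smooth mapping into that box keeps the whole pipeline differentiable.
import torch
import torch.nn as nn

class SafeguardLayer(nn.Module):
    """Differentiably squashes raw actions into a per-dimension safe box."""
    def __init__(self, a_min: torch.Tensor, a_max: torch.Tensor):
        super().__init__()
        self.register_buffer("a_min", a_min)
        self.register_buffer("a_max", a_max)

    def forward(self, raw_action: torch.Tensor) -> torch.Tensor:
        # tanh maps to (-1, 1); the affine rescaling maps into [a_min, a_max].
        # The mapping is smooth, so the Jacobian of safe_action w.r.t.
        # raw_action exists everywhere and backpropagation passes through.
        unit = torch.tanh(raw_action)
        return self.a_min + 0.5 * (unit + 1.0) * (self.a_max - self.a_min)

# Usage: policy -> safeguard -> (differentiable simulator) -> loss.
policy = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))
safeguard = SafeguardLayer(torch.tensor([-1.0, -0.5]), torch.tensor([1.0, 0.5]))

state = torch.randn(8, 4)               # batch of states
safe_action = safeguard(policy(state))  # guaranteed inside the safe box
loss = safe_action.pow(2).sum()         # stand-in for a simulator rollout loss
loss.backward()                         # analytic gradients reach the policy
```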

Article


IEEE Open Journal of Control Systems

Early Access. Sep. 2025.

Authors

T. Walter • H. Markgraf • J. Külz • M. Althoff

Links

DOI

Research Area

 B3 | Multimodal Perception

BibTeX Key: WMK+25
