
On Convex and Non-Convex Abstraction-Refinement Techniques Guaranteeing Safety of Artificial Intelligence


Abstract

Artificial intelligence is increasingly used in safety-critical systems, which requires formal guarantees to prevent harm. This thesis addresses the challenging problem of formally verifying neural networks under uncertain inputs using reachability-based abstractions and iterative refinement. Four techniques improve the precision, scalability, and usability of such verification, enabling its use by non-experts. Experiments confirm their effectiveness, including applications during training and for interpretability.
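As a rough illustration of the reachability-based abstraction idea, the sketch below propagates an interval over-approximation of an uncertain input set through a tiny feed-forward network (interval bound propagation, a simple convex abstraction). The thesis itself works with more precise set representations and refinement; the two-layer network and its weights here are purely hypothetical.

```python
# Minimal sketch of reachability analysis via interval bound propagation
# (a coarse convex abstraction). Weights are made up for illustration.
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Soundly propagate the box [lo, hi] through the affine map W x + b."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst-case growth of the box
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lo, hi):
    """ReLU is monotone, so it maps box bounds to box bounds exactly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Hypothetical 2-layer network
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]]);              b2 = np.array([0.5])

# Uncertain input: every x in the box [-0.1, 0.1]^2
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
lo, hi = ibp_relu(*ibp_affine(lo, hi, W1, b1))
lo, hi = ibp_affine(lo, hi, W2, b2)
print(lo, hi)  # a guaranteed enclosure of all reachable outputs
```

If the resulting enclosure lies inside the safe output set, safety is verified for every input in the box; if not, the abstraction is refined (e.g. by splitting the input set or using tighter non-convex set representations) before concluding anything.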



Dissertation

TU München. Mar. 2026

Authors

T. Ladner

Links

URL

Research Area

B3 | Multimodal Perception

BibTeXKey: Lad26
