Automatic Abstraction Refinement in Neural Network Verification Using Sensitivity Analysis

MCML Authors

Matthias Althoff

Prof. Dr.

Principal Investigator

Abstract

The formal verification of neural networks is essential for their application in safety-critical environments. However, set-based verification of neural networks using linear approximations often yields overly conservative results, while nonlinear approximations quickly become computationally infeasible for deep neural networks. We address this issue for the first time by automatically balancing precision and computation time without splitting the propagated set. Our work introduces a novel automatic abstraction refinement approach that uses sensitivity analysis to iteratively reduce the abstraction error at the neuron level until either the specifications are met or a maximum number of iterations is reached. Our evaluation shows that we can tightly over-approximate the output sets of deep neural networks and that our approach is up to a thousand times faster than a naive approach. We further demonstrate the applicability of our approach in closed-loop settings.
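The refinement loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch only, not the paper's implementation: the names `sensitivity`, `refine`, and `spec_satisfied`, the neuron representation, and the selection heuristic (refine the neuron whose abstraction error the output is most sensitive to) are all assumptions introduced for illustration.

```python
def refine_until_verified(neurons, sensitivity, spec_satisfied, refine,
                          max_iterations=10):
    """Sensitivity-guided abstraction refinement (illustrative sketch).

    Repeatedly refines the abstraction of the most sensitive neuron
    until the specification is satisfied or the iteration budget is
    exhausted, mirroring the loop described in the abstract.
    """
    for iteration in range(max_iterations):
        if spec_satisfied(neurons):
            return True, iteration
        # Pick the neuron whose abstraction error most influences the
        # output set (hypothetical sensitivity measure).
        target = max(neurons, key=sensitivity)
        refine(target)
    return spec_satisfied(neurons), max_iterations


# Toy usage with made-up numbers: each neuron carries an abstraction
# error, refinement halves it, and the specification holds once the
# total error drops below a threshold.
neurons = [{"error": 4.0}, {"error": 1.0}]
verified, iterations = refine_until_verified(
    neurons,
    sensitivity=lambda n: n["error"],
    spec_satisfied=lambda ns: sum(n["error"] for n in ns) < 2.0,
    refine=lambda n: n.update(error=n["error"] / 2),
)
```

The key design point reflected here is that refinement is targeted per neuron rather than splitting the propagated set, so each iteration tightens the over-approximation only where sensitivity analysis indicates it matters most.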

inproceedings


HSCC 2023

26th ACM International Conference on Hybrid Systems: Computation and Control. San Antonio, TX, USA, May 09-12, 2023.

Authors

T. Ladner, M. Althoff

Links

DOI

Research Area

B3 | Multimodal Perception

BibTeX Key: LA23
