Artificial intelligence is increasingly deployed in safety-critical systems, where formal guarantees are needed to prevent harm. This thesis addresses the hard problem of formally verifying neural networks under uncertain inputs using reachability-based abstractions and refinement: the set of possible inputs is over-approximated, propagated through the network, and the abstraction is refined when it is too coarse to decide the property. Four techniques improve the precision, scalability, and usability of this approach, enabling verification by non-experts. Experiments confirm their effectiveness, including applications during training and for interpretability.
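Since the abstract does not detail the techniques themselves, the following is only a minimal sketch of interval bound propagation, one common reachability-based abstraction for neural networks; the layer weights, input box, and printed output range are hypothetical illustration values, not taken from the thesis.

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    # Each output coordinate is minimized/maximized per the sign of W's entries.
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps boxes to boxes exactly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Hypothetical two-layer network; the uncertain input is the box [-0.1, 0.1]^2.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.1, -0.2])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])

lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)

# If the over-approximated output range satisfies the property (e.g., stays
# below a threshold), it holds for every input in the box.
print(f"output in [{lo[0]:.3f}, {hi[0]:.3f}]")
```

When the resulting bounds are too loose to decide a property, refinement (for instance, splitting the input box and re-propagating each half) tightens the over-approximation at the cost of more work.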
BibTeX key: Lad26