ModelStar: Reachability Analysis-based Safety Verification of Neural Networks Against Model Perturbations

Muhammad Usama Zubair
Taylor T. Johnson
Kanad Basu
Waseem Abbas

Abstract

The widespread adoption of deep neural network (DNN)-based learning systems in safety-critical applications demands exceptional reliability. However, this reliability can be compromised by perturbations in model parameters, such as variations in neural network weights caused by hardware vulnerabilities and environmental factors, which can lead to mispredictions and undermine system safety. To address this, we propose ‘ModelStar’, a framework that leverages reachability analysis to evaluate the robustness of DNNs against weight perturbations. ModelStar employs a linear set propagation technique to analyze the impact of an infinite family of parameter variations on DNN outputs. Our comprehensive analysis demonstrates that ModelStar not only establishes tighter robustness bounds but also verifies DNN robustness for up to 60% more samples from image classification datasets than existing methods. Furthermore, ModelStar extends safety verification to convolutional layers, advancing the state of the art in neural network safety verification. These results highlight ModelStar’s efficacy in improving the reliability of DNNs in real-world, safety-critical scenarios.
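To give intuition for verifying outputs under a set of weight perturbations, the sketch below bounds the output of a single linear layer when every weight may deviate by up to a radius eps. It uses plain interval arithmetic, a coarser over-approximation than the star-set propagation the abstract describes; the function name, example matrix, and eps value are illustrative, not taken from the paper.

```python
# Illustrative sketch: bound y = (W + D) @ x + b over ALL perturbations
# D with |D_ij| <= eps, using interval arithmetic. ModelStar's linear
# set (star) propagation yields tighter bounds than this approximation.

def perturbed_linear_bounds(W, b, x, eps):
    """Return elementwise lower/upper bounds on (W + D) @ x + b."""
    lo, hi = [], []
    for i, row in enumerate(W):
        nominal = sum(w * xj for w, xj in zip(row, x)) + b[i]
        # Each weight can shift the output by at most eps * |x_j|.
        slack = eps * sum(abs(xj) for xj in x)
        lo.append(nominal - slack)
        hi.append(nominal + slack)
    return lo, hi

# Toy example: 2x2 weight matrix, perturbation radius 0.1.
W = [[1.0, -2.0], [0.5, 0.5]]
b = [0.0, 1.0]
x = [1.0, 1.0]
lo, hi = perturbed_linear_bounds(W, b, x, eps=0.1)
# Robustness check (classification intuition): the prediction is
# verified if the true class's lower bound exceeds every other
# class's upper bound across the whole perturbation family.
```

Chaining such bounds layer by layer (with interval or, more precisely, star-set representations at each step) is what lets a single analysis pass cover an infinite family of weight variations rather than sampled perturbations.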
