Existing neural network verifiers compute a proof that each input is handled correctly under a given perturbation by propagating a symbolic abstraction of reachable values at each layer. This process is repeated from scratch independently for each input (e.g., image) and perturbation (e.g., rotation), leading to an expensive overall proof effort when handling an entire dataset. In this work we introduce a new method for reducing this verification cost without losing precision, based on a key insight: abstractions obtained at intermediate layers for different inputs and perturbations can overlap or contain each other. Leveraging this insight, we introduce the general concept of shared certificates, enabling proof effort reuse across multiple inputs to reduce overall verification costs. We perform an extensive experimental evaluation demonstrating the effectiveness of shared certificates in reducing verification cost on a range of datasets and attack specifications for image classifiers, including the popular patch and geometric perturbations.
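The reuse idea can be illustrated with a small sketch (not the paper's actual implementation; all names such as `Box`, `affine`, and `verify_with_sharing` are illustrative). It propagates interval (box) abstractions through a network and, at a chosen intermediate layer, checks whether the current abstraction is contained in a previously cached one; if so, the remainder of that earlier proof can be reused:

```python
import numpy as np

class Box:
    """Axis-aligned box abstraction: elementwise lower/upper bounds."""
    def __init__(self, lo, hi):
        self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)

    def contains(self, other):
        # self contains other iff all of other's bounds lie inside self's.
        return np.all(self.lo <= other.lo) and np.all(other.hi <= self.hi)

def affine(box, W, b):
    # Sound interval propagation through y = Wx + b (center/radius form).
    c = (box.lo + box.hi) / 2
    r = (box.hi - box.lo) / 2
    yc = W @ c + b
    yr = np.abs(W) @ r
    return Box(yc - yr, yc + yr)

def relu(box):
    # ReLU is monotone, so it can be applied to the bounds directly.
    return Box(np.maximum(box.lo, 0), np.maximum(box.hi, 0))

def verify_with_sharing(layers, input_box, cache, match_layer=1):
    """Propagate input_box through (W, b) layers. If the abstraction at
    `match_layer` is contained in a cached abstraction whose suffix proof
    already succeeded, reuse that proof instead of continuing."""
    box = input_box
    for i, (W, b) in enumerate(layers):
        box = relu(affine(box, W, b))
        if i == match_layer:
            for cached_box in cache:
                if cached_box.contains(box):
                    return True  # proof reuse: cached suffix certificate applies
            cache.append(box)  # remember this abstraction for later inputs
    # No match: finish propagation; the output property would be checked here.
    return box
```

Because box propagation is monotone with respect to containment, an input region contained in a previously verified one yields a contained intermediate abstraction, which is what makes this kind of sharing sound.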
Shared Certificates for Neural Network Verification
Marc Fischer*, Christian Sprecher*, Dimitar I. Dimitrov, Gagandeep Singh, Martin Vechev
CAV 2022
* Equal contribution
@inproceedings{fischer2022shared,
  title={Shared certificates for neural network verification},
  author={Fischer, Marc and Sprecher, Christian and Dimitrov, Dimitar Iliev and Singh, Gagandeep and Vechev, Martin},
  booktitle={Computer Aided Verification: 34th International Conference, CAV 2022, Haifa, Israel, August 7--10, 2022, Proceedings, Part I},
  pages={127--148},
  year={2022},
  organization={Springer}
}