Recent work has exposed the vulnerability of computer vision models to spatial transformations. Due to the widespread use of such models in safety-critical applications, it is crucial to quantify their robustness against spatial transformations. However, existing work quantifies spatial robustness only empirically, via adversarial attacks, which provide no provable guarantees. In this work, we propose novel convex relaxations, which enable us, for the first time, to provide a certificate of robustness against spatial transformations. Our convex relaxations are model-agnostic and can be leveraged by a wide range of neural network verifiers. Experiments on several network architectures and datasets demonstrate the effectiveness and scalability of our method.
Efficient Certification of Spatial Robustness
Anian Ruoss, Maximilian Baader, Mislav Balunović, Martin Vechev
AAAI 2021

@inproceedings{ruoss2020spatial,
  title     = {Efficient Certification of Spatial Robustness},
  author    = {Ruoss, Anian and Baader, Maximilian and Balunović, Mislav and Vechev, Martin},
  booktitle = {Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI} 2021},
  year      = {2021},
  url       = {https://arxiv.org/abs/2009.09318}
}
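The paper's convex relaxations are not reproduced on this page, but the following minimal Python sketch illustrates the underlying setting: bounding how each pixel of an image can vary under a family of spatial transformations (here, rotations). The helper names rotate_bilinear and sampled_pixel_bounds are hypothetical, and the bounds below come from sampling alone, which is empirical; a sound certificate would additionally need a convex relaxation covering the transformations between the sampled angles before a neural network verifier propagates the resulting input region through the network.

import numpy as np

def rotate_bilinear(img, theta):
    """Rotate a 2D grayscale image by theta radians about its center,
    using bilinear interpolation with zero padding outside the image."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Inverse mapping: for each output pixel, find its source coordinate.
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    sy = cy + (ys - cy) * cos_t - (xs - cx) * sin_t
    sx = cx + (ys - cy) * sin_t + (xs - cx) * cos_t
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    wy, wx = sy - y0, sx - x0
    out = np.zeros_like(img, dtype=float)
    # Accumulate the four bilinear corner contributions, skipping
    # source coordinates that fall outside the image (zero padding).
    for dy, dx, wgt in [(0, 0, (1 - wy) * (1 - wx)), (0, 1, (1 - wy) * wx),
                        (1, 0, wy * (1 - wx)), (1, 1, wy * wx)]:
        yy, xx = y0 + dy, x0 + dx
        valid = (yy >= 0) & (yy < h) & (xx >= 0) & (xx < w)
        out[valid] += wgt[valid] * img[yy[valid], xx[valid]]
    return out

def sampled_pixel_bounds(img, max_angle_deg, n_samples=64):
    """Per-pixel lower/upper bounds over sampled rotations in
    [-max_angle_deg, +max_angle_deg]. NOTE: sampling alone is not
    sound; a certificate must also account for the gap between
    sampled angles, e.g. via a convex relaxation."""
    angles = np.linspace(-max_angle_deg, max_angle_deg, n_samples)
    rotated = np.stack([rotate_bilinear(img, np.deg2rad(a)) for a in angles])
    return rotated.min(axis=0), rotated.max(axis=0)

Under these assumptions, the returned per-pixel intervals play the role of the input region that a downstream verifier (e.g., one based on interval or linear relaxations) would then attempt to certify.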