Convex relaxations are a key component of training and certifying provably safe neural networks. However, despite substantial progress, a wide and poorly understood accuracy gap to standard networks remains, raising the question of whether this is due to fundamental limitations of convex relaxations. Initial work investigating this question focused on the simple and widely used IBP relaxation. It revealed that some univariate, convex, continuous piecewise linear (CPWL) functions cannot be encoded by any ReLU network such that its IBP-analysis is precise. To explore whether this limitation is shared by more advanced convex relaxations, we conduct the first in-depth study on the expressive power of ReLU networks across all commonly used convex relaxations. We show that: (i) more advanced relaxations allow a larger class of univariate functions to be expressed as precisely analyzable ReLU networks, (ii) more precise relaxations can allow exponentially larger solution spaces of ReLU networks encoding the same functions, and (iii) even using the most precise single-neuron relaxations, it is impossible to construct precisely analyzable ReLU networks that express multivariate, convex, monotone CPWL functions.
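For readers unfamiliar with IBP (interval bound propagation), the following minimal NumPy sketch is our own illustration, not code from the paper: it propagates an input interval through a two-layer ReLU network encoding |x| = ReLU(x) + ReLU(-x). On the input region [-1, 1], IBP returns the box [0, 2], which is strictly looser than the exact output range [0, 1]; this looseness is the kind of imprecision the abstract refers to when it asks whether a function can be encoded so that its analysis is precise.

import numpy as np

def ibp_affine(lb, ub, W, b):
    # Propagate the box [lb, ub] through the affine map x -> W @ x + b.
    W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
    return W_pos @ lb + W_neg @ ub + b, W_pos @ ub + W_neg @ lb + b

def ibp_relu(lb, ub):
    # ReLU is monotone, so it maps boxes to boxes exactly.
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

# Two-layer ReLU network encoding |x| = ReLU(x) + ReLU(-x).
W1, b1 = np.array([[1.0], [-1.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)

lb, ub = np.array([-1.0]), np.array([1.0])        # input region x in [-1, 1]
lb, ub = ibp_relu(*ibp_affine(lb, ub, W1, b1))    # pre-activations and ReLU
lb, ub = ibp_affine(lb, ub, W2, b2)               # output layer
print(lb, ub)  # IBP yields [0, 2], looser than the exact range [0, 1]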
Expressivity of ReLU-Networks under Convex Relaxations
Maximilian Baader*, Mark Niklas Müller*, Yuhao Mao, Martin Vechev
ICLR 2024
* Equal contribution
@inproceedings{BaaderMMV2023,
  title     = {Expressivity of {ReLU}-Networks under Convex Relaxations},
  author    = {Maximilian Baader and Mark Niklas M{\"{u}}ller and Yuhao Mao and Martin Vechev},
  booktitle = {The Twelfth International Conference on Learning Representations},
  year      = {2024}
}