Random Smoothing Might be Unable to Certify ℓ∞ Robustness for High-Dimensional Images
Avrim Blum, Travis Dick, Naren Manoj, Hongyang Zhang; 21(211):1−21, 2020.
Abstract
We show a hardness result for random smoothing to achieve certified adversarial robustness against attacks in the ℓp ball of radius ϵ when p > 2. Although random smoothing has been well understood for the ℓ2 case using the Gaussian distribution, much remains unknown concerning the existence of a noise distribution that works for the case of p > 2. This has been posed as an open problem by Cohen et al. (2019) and includes many significant paradigms such as the ℓ∞ threat model. In this work, we show that any noise distribution D over R^d that provides ℓp robustness for all base classifiers with p > 2 must satisfy E[η_i^2] = Ω(d^{1−2/p} ϵ^2 (1−δ)/δ^2) for 99% of the features (pixels) of the vector η ∼ D, where ϵ is the robust radius and δ is the score gap between the highest-scored class and the runner-up. Therefore, for high-dimensional images with pixel values bounded in [0, 255], the required noise will eventually dominate the useful information in the images, leading to trivial smoothed classifiers.
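As a rough back-of-the-envelope illustration of the bound (not part of the paper), the sketch below plugs assumed values of ϵ and δ into E[η_i^2] = Ω(d^{1−2/p} ϵ^2 (1−δ)/δ^2) with the hidden constant set to 1 and p = ∞ (so d^{1−2/p} = d), and compares the implied per-pixel noise standard deviation to the [0, 255] pixel range. The specific values of ϵ, δ, and the image dimensions are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the authors' code): evaluate the abstract's lower bound
# E[eta_i^2] = Omega(d^{1-2/p} * eps^2 * (1 - delta) / delta^2) with the hidden
# Omega constant taken as 1, and compare the implied per-pixel noise standard
# deviation to the [0, 255] pixel range. eps and delta are assumed values.

def noise_std_lower_bound(d, p, eps, delta):
    """Per-pixel noise std implied by the bound (Omega constant set to 1)."""
    variance = d ** (1.0 - 2.0 / p) * eps ** 2 * (1.0 - delta) / delta ** 2
    return np.sqrt(variance)

# Dimensions roughly corresponding to CIFAR-10, ImageNet, and a megapixel image.
for d in [32 * 32 * 3, 224 * 224 * 3, 1024 * 1024 * 3]:
    std = noise_std_lower_bound(d, p=np.inf, eps=8.0, delta=0.5)
    print(f"d={d:>9}: required noise std >= {std:,.0f} vs. pixel range [0, 255]")
```

Even for these modest choices the implied noise standard deviation exceeds the pixel range by orders of magnitude and grows with √d, which is the sense in which the noise "dominates the useful information" as the dimension increases.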
© JMLR 2020.