ON THE STATISTICAL FOUNDATIONS OF ADVERSARIALLY ROBUST LEARNING
EDGAR DOBRIBAN – UNIVERSITY OF PENNSYLVANIA
Robustness has long been viewed as an important desired property of statistical methods. More recently, it has been recognized that complex prediction models such as deep neural nets can be highly vulnerable to adversarially chosen perturbations of their inputs at test time. This area, termed adversarial robustness, has garnered an extraordinary amount of attention in the machine learning community over the last few years. However, even some of the most basic statistical questions remain unanswered. In this talk, I will present answers to some of them.
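To make the notion of a test-time adversarial perturbation concrete, here is a minimal sketch (not from the talk) of a fast-gradient-sign-style attack on a hand-built logistic-regression classifier. All weights, inputs, and the step size `eps` are invented for illustration.

```python
import numpy as np

# Illustration of a test-time adversarial perturbation (FGSM-style)
# against a fixed linear classifier; all numbers are invented.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fixed classifier: predict class 1 when sigmoid(w @ x + b) > 0.5.
w = np.array([2.0, -1.0])
b = 0.0

x = np.array([0.6, 0.1])   # clean input, confidently class 1
y = 1                      # true label

# Gradient of the logistic loss with respect to the *input* x.
p_clean = sigmoid(w @ x + b)
grad_x = (p_clean - y) * w

# FGSM: step of size eps in the sign of the input gradient,
# i.e. the direction that increases the loss.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)
p_adv = sigmoid(w @ x_adv + b)

print(p_clean)  # confidence for class 1 on the clean input (> 0.5)
print(p_adv)    # confidence after the small perturbation (< 0.5: label flips)
```

Even this toy example shows the core phenomenon: a small, deliberately chosen input change flips the prediction of an otherwise confident model.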
This is joint work with Hamed Hassani, David Hong, and Alex Robey.