Statistical Guarantees for Trustworthy Deep Learning

Osbert Bastani – University of Pennsylvania


Despite their tremendous success, neural networks have a number of shortcomings compared to traditional models: they are overconfident, lack robustness, and are hard to interpret. In this talk, I will describe two strategies for addressing these issues through the lens of statistics. First, I will describe a strategy for quantifying the uncertainty of arbitrary black-box models by constructing prediction intervals and sets with PAC-style guarantees, even in the face of covariate shift. In particular, we adapt conformal prediction to the covariate shift setting when the importance weights are only approximately known. Second, I will describe our work on understanding parameter identification of neural networks, which is complicated by the fact that neural networks possess symmetries that prohibit identification in the traditional sense. We provide guarantees on identifying the parameters (modulo symmetry) of a one-layer neural network with quadratic or ReLU activations. Furthermore, we show how these guarantees enable bandit and transfer learning algorithms for neural networks, and ensure robust generalization in the presence of covariate shift. I will conclude with ongoing work on overparameterized ReLU networks and one-layer convolutional networks, as well as an application of parameter identification to interpreting the internal representations of models trained to predict RNA splicing.
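The abstract does not spell out the construction, but the standard way to adapt conformal prediction to covariate shift is to reweight the calibration scores by (approximate) importance weights w(x) = dP_test/dP_train and take a weighted quantile. Below is a minimal, illustrative sketch of that weighted split-conformal quantile; the function name and the assumption of exact weights are mine, not from the talk.

```python
import numpy as np

def weighted_conformal_threshold(scores_cal, weights_cal, weight_test, alpha=0.1):
    """Weighted split-conformal score threshold under covariate shift (sketch).

    scores_cal:  nonconformity scores on a held-out calibration set
    weights_cal: importance weights w(x_i) at the calibration points
    weight_test: importance weight w(x) at the test point
    alpha:       target miscoverage level (e.g. 0.1 for 90% coverage)

    Returns the threshold t; the prediction set is {y : score(x, y) <= t}.
    """
    # Normalize the weights over calibration points plus the test point.
    w = np.concatenate([weights_cal, [weight_test]])
    p = w / w.sum()
    # The test point's own score is unknown, so it contributes +inf.
    s = np.concatenate([scores_cal, [np.inf]])
    order = np.argsort(s)
    cdf = np.cumsum(p[order])
    # Smallest score whose weighted CDF reaches 1 - alpha.
    idx = np.searchsorted(cdf, 1 - alpha)
    return s[order][idx]
```

With uniform weights (no shift), this reduces to the usual split-conformal quantile: for 99 calibration scores 1..99 and alpha = 0.1, the threshold is the ceil(100 * 0.9) = 90th smallest score. Note this sketch assumes the weights are known exactly; handling only approximately known weights is part of the contribution described in the talk.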


Osbert Bastani is a research assistant professor in the Department of Computer and Information Science at the University of Pennsylvania. He is broadly interested in trustworthy machine learning. Previously, he completed his Ph.D. in computer science at Stanford and his A.B. in mathematics at Harvard.