GENERALIZATION AND OPTIMIZATION OF DEEP NETWORKS
MATUS TELGARSKY – UNIVERSITY OF ILLINOIS AT URBANA–CHAMPAIGN
ABSTRACT
From the confusion surrounding the optimization and generalization of deep networks has arisen an exciting possibility: gradient descent is implicitly regularized, meaning it not only outputs iterates of low error, but also iterates of low complexity.
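As a concrete, much simpler instance of this phenomenon (not taken from the talk), the sketch below runs plain gradient descent on logistic regression with separable synthetic data: the training loss is driven toward zero while the iterate's direction stabilizes (it is known to converge toward the maximum-margin separator), so the normalized predictor stays simple even as its norm diverges. The data, step size, and iteration counts are arbitrary choices for illustration.

```python
# Minimal sketch of implicit regularization: gradient descent on logistic
# regression with linearly separable data. The loss goes to zero, ||w|| grows,
# but the direction w / ||w|| settles down (low "complexity" of the predictor).
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
y = np.sign(X @ np.array([1.0, 0.0]))  # separable: label = sign of first coordinate

def loss_and_grad(w):
    margins = y * (X @ w)                        # per-example margins y_i <x_i, w>
    loss = np.mean(np.logaddexp(0.0, -margins))  # mean of log(1 + exp(-margin))
    grad = -(X * (y * expit(-margins))[:, None]).mean(axis=0)
    return loss, grad

w = np.zeros(2)
for t in range(1, 20001):
    loss, grad = loss_and_grad(w)
    w -= 1.0 * grad
    if t % 5000 == 0:
        print(f"iter {t:6d}  loss {loss:.2e}  ||w|| {np.linalg.norm(w):6.2f}  "
              f"direction {np.round(w / np.linalg.norm(w), 3)}")
```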
This talk starts with a “spectrally-normalized” generalization bound that is small whenever gradient descent happens to select iterates with certain favorable properties. These properties can be verified in practice, but the bulk of the talk works towards theoretical guarantees, showing first that even stronger properties hold for logistic regression, and then that they hold for linear networks of arbitrary depth.
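For orientation, a bound of the kind referenced here (cf. Bartlett, Foster, and Telgarsky, 2017) has roughly the following shape; constants, logarithmic factors, and reference matrices are suppressed, and the statement in the talk may differ in its details:

\[
\Pr\big[\operatorname{sign} f(x) \neq y\big]
\;\lesssim\;
\widehat{\mathcal{R}}_\gamma(f)
\;+\;
\frac{\|X\|_F}{\gamma\, n}
\left(\prod_{i=1}^{L} \|A_i\|_\sigma\right)
\left(\sum_{i=1}^{L} \frac{\|A_i\|_{2,1}^{2/3}}{\|A_i\|_\sigma^{2/3}}\right)^{3/2}
\;+\;
\sqrt{\frac{\ln(1/\delta)}{n}},
\]

where $f$ is a depth-$L$ network with weight matrices $A_1,\dots,A_L$ and 1-Lipschitz activations, $\widehat{\mathcal{R}}_\gamma(f)$ is the fraction of the $n$ training examples with margin at most $\gamma$, $\|\cdot\|_\sigma$ is the spectral norm, $\|\cdot\|_{2,1}$ is a $(2,1)$ group norm, and $\|X\|_F$ collects the norms of the training inputs. The “favorable properties” are then small spectral norms together with margins bounded away from zero, which make this bound small.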
BIO
Matus Telgarsky is an assistant professor at the University of Illinois, Urbana-Champaign, specializing in machine learning theory. He was fortunate to receive his PhD from UCSD under the tutelage of Sanjoy Dasgupta. His good fortune has recently continued: in 2017 he was a research fellow at the Simons Institute; in 2018 he received a CAREER award; in 2019 he will co-organize a Simons Institute program on deep learning with Aleksander Madry and Elchanan Mossel.