RANDOM PERTURBATIONS IN MACHINE LEARNING

AMBUJ TEWARI – UNIVERSITY OF MICHIGAN

ABSTRACT

Hannan proved a fundamental result in online learning that led to the notion now called Hannan consistency. Breiman coined the term “bagging” to denote bootstrap aggregating. Many current algorithms for training deep neural networks use a technique called dropout. What do these three ideas have in common?

All three ideas rely on using random perturbations of some sort to enable learning. With help from collaborators, I have been trying to better understand the mathematical properties of random perturbation methods in machine learning, especially in online learning. In this talk, I will briefly describe what we have learned. I will also discuss fascinating questions that remain open. No prior knowledge of online learning will be assumed.
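As a rough illustration of what a random perturbation method looks like in online learning, here is a minimal sketch of a Hannan-style “Follow the Perturbed Leader” rule for the experts setting, in the spirit of Kalai and Vempala. The function name, the uniform noise distribution, and the scale parameter eta are illustrative assumptions, not details taken from the talk.

```python
import numpy as np

def follow_the_perturbed_leader(losses, eta=0.1, seed=0):
    """Toy Follow the Perturbed Leader on a fixed sequence of expert losses.

    losses: array of shape (T, K); losses[t, i] is expert i's loss in round t.
    eta:    perturbation scale (illustrative choice); noise is drawn uniformly
            from [0, 1/eta].
    """
    rng = np.random.default_rng(seed)
    T, K = losses.shape
    cumulative = np.zeros(K)   # cumulative loss of each expert so far
    learner_loss = 0.0
    for t in range(T):
        # Draw a fresh random perturbation each round, then follow the
        # "perturbed leader": the expert with the smallest perturbed loss.
        perturbation = rng.uniform(0.0, 1.0 / eta, size=K)
        choice = int(np.argmin(cumulative + perturbation))
        learner_loss += losses[t, choice]
        cumulative += losses[t]
    return learner_loss

# Usage: 1000 rounds, 5 experts with Bernoulli losses; compare the learner's
# total loss against the best single expert in hindsight.
rng = np.random.default_rng(1)
L = rng.binomial(1, rng.uniform(0.2, 0.8, size=5), size=(1000, 5)).astype(float)
print(follow_the_perturbed_leader(L, eta=0.1), L.sum(axis=0).min())
```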

(This talk is based on joint work with Jacob Abernethy, Chansoo Lee, Audra McMillan and Zifan Li.)

BIO

Ambuj Tewari is an associate professor in the Department of Statistics and the Department of EECS (by courtesy) at the University of Michigan, Ann Arbor. He is also affiliated with the Michigan Institute for Data Science (MIDAS). He obtained his PhD under the supervision of Peter Bartlett at the University of California, Berkeley. His research interests lie in machine learning, including statistical learning theory, online learning, reinforcement learning and control theory, network analysis, and optimization for machine learning. He collaborates with scientists to seek novel applications of machine learning in mobile health, learning analytics, and computational chemistry. His research has been recognized with paper awards at COLT 2005, COLT 2011, and AISTATS 2015. He received an NSF CAREER award in 2015 and a Sloan Research Fellowship in 2017.