Towards a Less Conservative Theory of Machine Learning: Unstable Optimization and Implicit Regularization
JINGFENG WU – UNIVERSITY OF CALIFORNIA, BERKELEY
ABSTRACT
Deep learning’s empirical success challenges the “conservative” nature of classical optimization and statistical learning theories. Classical theory mandates small stepsizes for training stability and explicit regularization for complexity control. Yet, deep learning leverages mechanisms that thrive beyond these traditional boundaries. In this talk, I present a research program dedicated to building a less conservative theoretical foundation by demystifying two such mechanisms:
- Unstable Optimization: I show that large stepsizes, despite causing local oscillations, accelerate the global convergence of gradient descent (GD) in overparameterized logistic regression (see the first sketch after this list).
- Implicit Regularization: I show that the implicit regularization of early-stopped GD statistically dominates explicit $\ell_2$-regularization across all linear regression problem instances (see the second sketch after this list).
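
To make the first bullet concrete, here is a minimal numerical sketch (my own illustration, not the speaker's code or experiments): gradient descent on an overparameterized logistic regression problem, run once with a conservative stepsize and once with a large one. The problem sizes, stepsizes, and step counts are arbitrary assumptions chosen only to exhibit the contrast between the two regimes.

```python
# Illustrative sketch (assumed setup, not the talk's experiments): GD on
# overparameterized logistic regression with a small vs. a large stepsize.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                        # d > n: overparameterized, data separable
X = rng.standard_normal((n, d))
y = rng.choice([-1.0, 1.0], size=n)   # binary labels in {-1, +1}


def logistic_loss(w):
    """Average logistic loss: (1/n) * sum_i log(1 + exp(-y_i <x_i, w>))."""
    margins = y * (X @ w)
    return float(np.mean(np.logaddexp(0.0, -margins)))


def gradient(w):
    """Gradient of the average logistic loss (numerically stable sigmoid)."""
    margins = y * (X @ w)
    coeffs = -y * 0.5 * (1.0 - np.tanh(margins / 2.0))   # = -y * sigmoid(-margin)
    return X.T @ coeffs / n


def run_gd(stepsize, num_steps=200):
    """Run GD from zero initialization and record the loss at every step."""
    w = np.zeros(d)
    losses = []
    for _ in range(num_steps):
        losses.append(logistic_loss(w))
        w -= stepsize * gradient(w)
    return losses


small = run_gd(stepsize=0.5)    # conservative, "stable" stepsize
large = run_gd(stepsize=50.0)   # large stepsize: loss may oscillate early on

print("loss after 200 steps, small stepsize:", small[-1])
print("loss after 200 steps, large stepsize:", large[-1])
```

Comparing the two loss trajectories on such an instance is one way to observe the phenomenon the bullet describes: the large-stepsize run can be non-monotone early on yet reach a much smaller loss within the same step budget.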
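For the second bullet, the following toy comparison (again my own illustration under assumed parameters, not the talk's analysis) contrasts the best iterate along the GD path on a simulated linear regression instance (implicit regularization via early stopping) with the best explicitly $\ell_2$-regularized (ridge) estimator over a grid of regularization strengths.

```python
# Illustrative sketch (assumed setup): early-stopped GD vs. ridge regression
# on a simulated linear regression instance with isotropic Gaussian features.
import numpy as np

rng = np.random.default_rng(1)
n, d, sigma = 100, 50, 0.5
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d) / np.sqrt(d)      # ground-truth parameter
y = X @ w_star + sigma * rng.standard_normal(n)   # noisy responses


def excess_risk(w):
    """Population excess risk E[(x^T w - x^T w*)^2] for isotropic Gaussian x."""
    return float(np.sum((w - w_star) ** 2))


# Gradient descent on the least-squares objective, tracking risk along the path.
stepsize, num_steps = 0.01, 2000
w = np.zeros(d)
gd_risks = []
for _ in range(num_steps):
    gd_risks.append(excess_risk(w))
    w -= stepsize * (X.T @ (X @ w - y)) / n

# Ridge regression over a grid of regularization strengths.
ridge_risks = []
for lam in np.logspace(-4, 2, 50):
    w_ridge = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
    ridge_risks.append(excess_risk(w_ridge))

print("best early-stopped GD risk:", min(gd_risks))
print("best ridge risk:           ", min(ridge_risks))
```

On a single simulated instance this only gives a point of comparison; the statistical dominance claimed in the talk is a theoretical statement over all problem instances, not something one run establishes.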
I further showcase how these theoretical principles lead to practically relevant algorithmic designs (such as Seesaw for reducing serial steps in large language model pretraining). I conclude by outlining a path towards a rigorous understanding of modern learning paradigms.

