Gradient Methods for Parameter Estimation in High Dimensions

Reza Gheissari – University of California, Berkeley

Abstract

Parametric inference tasks are frequently tackled by optimizing a suitable loss function to obtain an estimator of an unknown parameter. A general-purpose approach to this optimization problem is the family of gradient methods, which iteratively optimize the loss function via gradient updates, possibly together with some external stochasticity. When the problem is high-dimensional, however, the loss function is often complicated and, in particular, highly non-convex. We describe how ideas from statistical physics and Markov chain analysis can be imported to understand the success or failure of gradient methods for solving a family of such inference problems, focusing specifically on the role of the initialization and the timescales for which the method runs.
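
As a concrete illustration (not drawn from the talk or the papers below), the following NumPy sketch runs a gradient update with optional external stochasticity, in the style of projected Langevin dynamics, on a spiked 3-tensor model of the kind studied in the tensor PCA literature. The model normalization, signal strength lam, step size eta, inverse temperature beta, and all function names are illustrative assumptions.

```python
import numpy as np

def spiked_tensor(n, lam, rng):
    """Sample Y = lam * v^{(x)3} + W / sqrt(n): a rank-one signal plus
    Gaussian noise (illustrative normalization)."""
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    W = rng.standard_normal((n, n, n))
    return lam * np.einsum("i,j,k->ijk", v, v, v) + W / np.sqrt(n), v

def noisy_gradient_ascent(Y, steps=2000, eta=0.05, beta=np.inf, rng=None):
    """Maximize f(x) = <Y, x (x) x (x) x> over the unit sphere by gradient
    steps plus optional Gaussian noise (a Langevin-style update)."""
    rng = rng or np.random.default_rng()
    n = Y.shape[0]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)                          # uninformative random start
    for _ in range(steps):
        grad = (np.einsum("ijk,j,k->i", Y, x, x)    # exact gradient of the
                + np.einsum("jik,j,k->i", Y, x, x)  # cubic form, valid even
                + np.einsum("jki,j,k->i", Y, x, x)) # for non-symmetric Y
        x = x + eta * grad
        if np.isfinite(beta):                       # external stochasticity
            x += np.sqrt(2.0 * eta / beta) * rng.standard_normal(n)
        x /= np.linalg.norm(x)                      # retract to the sphere
    return x

rng = np.random.default_rng(0)
Y, v = spiked_tensor(n=40, lam=10.0, rng=rng)
x_hat = noisy_gradient_ascent(Y, rng=rng)
print(abs(x_hat @ v))  # overlap with the spike; values near 1 mean recovery
```

With beta=np.inf (the default) the noise term is skipped and the update is plain projected gradient ascent; finite beta adds the Gaussian perturbation. Whether the random initialization succeeds depends on the signal strength and the number of steps, which is exactly the initialization/timescale question the abstract raises.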

Three relevant papers, available at https://www.stat.berkeley.edu/~gheissari/:

  • Algorithmic thresholds for tensor PCA, with Gérard Ben Arous and Aukosh Jagannath.
  • Online stochastic gradient descent on non-convex losses from high-dimensional inference, with Gérard Ben Arous and Aukosh Jagannath.
  • Universality of diffusions interacting through a random matrix, with Amir Dembo.