Algorithmic Thresholds in Random Optimization Problems

BRICE HUANG – MASSACHUSETTS INSTITUTE OF TECHNOLOGY

ABSTRACT

Optimizing high-dimensional functions generated from random data is a central problem in modern statistics and machine learning. Because these objectives are highly non-convex, the maximum value reachable by efficient algorithms is typically smaller than the true maximum, and characterizing the fundamental computational limits of these problems is a major challenge.

In this talk, I will describe the “branching overlap gap property,” a new technique I have developed that yields sharp algorithmic thresholds in several random optimization problems. We focus on two prototypical families of random functions: the Hamiltonians of mean-field spin glasses and random perceptrons. These models are widely studied in probability, statistical physics, and computer science, and are related to problems in high-dimensional statistical inference and neural networks. We exactly characterize the maximum value achievable by a broad class of stable algorithms, which includes gradient descent, Langevin dynamics, and general first-order methods on dimension-free time scales. We also identify this value with a phase transition in the solution landscape, yielding a unified geometric description of the algorithmic threshold that holds across several problems.
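For concreteness, one standard mean-field spin glass model studied in this literature is the mixed $p$-spin Hamiltonian (given here as background; the talk's precise setting and assumptions may differ):
$$
H_N(\sigma) \;=\; \sum_{p \ge 2} \frac{\gamma_p}{N^{(p-1)/2}} \sum_{i_1, \dots, i_p = 1}^{N} g_{i_1 \cdots i_p}\, \sigma_{i_1} \cdots \sigma_{i_p},
$$
where the coefficients $g_{i_1 \cdots i_p}$ are i.i.d. standard Gaussians and $\sigma$ ranges over the sphere of radius $\sqrt{N}$ or the hypercube $\{\pm 1\}^N$. This makes $H_N$ a Gaussian process with covariance $\mathbb{E}[H_N(\sigma) H_N(\rho)] = N\,\xi(\langle \sigma, \rho \rangle / N)$, where $\xi(s) = \sum_{p \ge 2} \gamma_p^2 s^p$; in this setting, both the true maximum and the value reachable by stable algorithms can be expressed as functionals of the mixture $\xi$.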
