Understanding Statistical-vs-Computational Tradeoffs via Low-Degree Polynomials

Alexander Wein – Georgia Institute of Technology

Abstract

A central goal in modern data science is to design algorithms for statistical inference tasks such as community detection, high-dimensional clustering, sparse PCA, and many others. Ideally these algorithms would be both statistically optimal and computationally efficient. However, it often seems impossible to achieve both these goals simultaneously: for many problems, the optimal statistical procedure involves a brute force search while all known polynomial-time algorithms are statistically sub-optimal (requiring more data or higher signal strength than is information-theoretically necessary). In the quest for optimal algorithms, it is therefore important to understand the fundamental statistical limitations of computationally efficient algorithms.

I will discuss an emerging theoretical framework for understanding these questions, based on studying the class of “low-degree polynomial algorithms.” This is a powerful class of algorithms that captures the best known polynomial-time algorithms for a wide variety of statistical tasks. This perspective has led to the discovery of many new and improved algorithms, as well as many matching lower bounds: we now have tools to prove the failure of all low-degree algorithms, which provides concrete evidence for the inherent computational hardness of statistical problems. This line of work illustrates that low-degree polynomials provide a unifying framework for understanding the computational complexity of a wide variety of statistical tasks, encompassing hypothesis testing, estimation, and optimization.
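
As a concrete, simplified illustration of what a low-degree polynomial algorithm looks like, the sketch below uses the planted clique detection problem (chosen here for illustration; it is not necessarily the example used in the talk or the papers below). Counting edges is a degree-1 polynomial of the adjacency-matrix entries, and this single statistic already separates a planted clique of size k from pure noise once k is roughly of order √n, which is, up to constants, where efficient algorithms are believed to start working. The parameters and helper names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_graph(n, clique_size=0):
    """Upper-triangular adjacency entries of G(n, 1/2), optionally with a
    planted clique on the first `clique_size` vertices."""
    A = rng.integers(0, 2, size=(n, n))
    A = np.triu(A, 1)  # keep only the i < j entries
    if clique_size > 0:
        idx = np.arange(clique_size)
        A[np.ix_(idx, idx)] = np.triu(np.ones((clique_size, clique_size)), 1)
    return A

def edge_count(A):
    """A degree-1 polynomial of the entries of A: the total number of edges."""
    return int(A.sum())

n, k = 2000, 150  # k a modest multiple of sqrt(n), so edge counting separates
null = [edge_count(sample_graph(n)) for _ in range(20)]
planted = [edge_count(sample_graph(n, k)) for _ in range(20)]
print("null mean ± std:    %.0f ± %.0f" % (np.mean(null), np.std(null)))
print("planted mean ± std: %.0f ± %.0f" % (np.mean(planted), np.std(planted)))
# Planting shifts the mean by about k^2/4 extra edges, while the null standard
# deviation is of order n, so this low-degree statistic distinguishes the two
# distributions once k exceeds a constant times sqrt(n).
```

Higher-degree polynomials of the entries (e.g., subgraph counts) play the analogous role for harder regimes, and the lower bounds discussed in the talk rule out all such statistics up to a given degree.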

The talk will be primarily based on these papers:

  1. Optimal Spectral Recovery of a Planted Vector in a Subspace
  2. Computational Barriers to Estimation from Low-Degree Polynomials
  3. Optimal Low-Degree Hardness of Maximum Independent Set