Breaking the Sample Size Barrier in Reinforcement Learning

Yuting Wei – Carnegie Mellon University

Abstract

Reinforcement learning (RL), which is frequently modeled as sequential learning and decision making in the face of uncertainty, has been garnering growing interest in recent years due to its remarkable success in practice. In contemporary RL applications, it is increasingly common to encounter environments with prohibitively large state and action spaces, which impose stringent requirements on the sample efficiency of the RL algorithms in use. Despite this empirical success, however, the theoretical underpinnings of many popular RL algorithms remain highly inadequate, even in the tabular setting.

In this talk, we present two vignettes regarding the sample efficiency of RL algorithms. The first vignette demonstrates that a perturbed model-based RL approach is minimax optimal under a generative model, without suffering from the sample size barrier present in all past work. In the second vignette, we pin down the sample complexity of Q-learning on Markovian samples, which substantially improves upon prior results by a factor at least as large as the dimension of the state-action space. These results cover two distinct RL paradigms, and might shed light on the efficacy of these algorithms in more complicated scenarios.
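For readers unfamiliar with the second setting, the sketch below illustrates ordinary tabular (asynchronous) Q-learning run on a single Markovian trajectory, the scheme whose sample complexity the talk addresses. The toy random MDP, the epsilon-greedy behavior policy, and the step-size schedule are illustrative assumptions only, not the specific algorithm, parameters, or analysis presented in the talk.

    # Minimal sketch of tabular Q-learning on one Markovian sample path.
    # The random MDP, behavior policy, and step sizes are assumptions for
    # illustration; they are not the setting analyzed in the talk.
    import numpy as np

    rng = np.random.default_rng(0)
    S, A, gamma = 5, 3, 0.9

    # A small random MDP: transition kernel P[s, a] and rewards R[s, a].
    P = rng.dirichlet(np.ones(S), size=(S, A))
    R = rng.uniform(size=(S, A))

    Q = np.zeros((S, A))
    s = 0
    for t in range(1, 200_001):
        # Epsilon-greedy behavior policy generates the Markovian trajectory.
        a = rng.integers(A) if rng.uniform() < 0.1 else int(np.argmax(Q[s]))
        s_next = rng.choice(S, p=P[s, a])
        # Asynchronous update: only the visited (s, a) entry changes per sample.
        eta = 1.0 / (1.0 + 0.001 * t)  # illustrative step-size schedule
        target = R[s, a] + gamma * np.max(Q[s_next])
        Q[s, a] += eta * (target - Q[s, a])
        s = s_next

    print(Q)  # estimated optimal action-value function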

Bio

Yuting Wei is currently an assistant professor in the Statistics and Data Science Department at Carnegie Mellon University. Prior to that, she was a Stein Fellow at Stanford University, and she received her Ph.D. in statistics from the University of California, Berkeley, working with Martin Wainwright and Aditya Guntuboyina. She was the recipient of the 2018 Erich L. Lehmann Citation from the Berkeley statistics department for her Ph.D. dissertation in theoretical statistics. Her research interests include high-dimensional and non-parametric statistics, statistical machine learning, and reinforcement learning.