Nonparametric Empirical Bayes Inference

Nikolaos Ignatiadis – Stanford University

Abstract

In an empirical Bayes analysis, we use data from repeated sampling to imitate inferences made by an oracle Bayesian with extensive knowledge of the data-generating distribution. Existing results provide a comprehensive characterization of when and why empirical Bayes point estimates accurately recover oracle Bayes behavior. In the first part of this talk, we construct flexible and practical nonparametric confidence intervals that provide asymptotic frequentist coverage of empirical Bayes estimands, such as the posterior mean and the local false sign rate. From a methodological perspective, we build upon results on affine minimax estimation, and our coverage statements hold even when estimands are only partially identified or when empirical Bayes point estimates converge very slowly. In the second part of the talk, we apply these ideas to study randomization-based inference for treatment effects in the regression discontinuity design under a model where the running variable has exogenous measurement error.
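
For concreteness, a standard setting for these estimands (notation mine, not spelled out in the abstract) is the Gaussian location model with known noise level, in which

\[
\mu_i \sim G, \qquad X_i \mid \mu_i \sim \mathcal{N}(\mu_i, \sigma^2), \qquad i = 1, \dots, n,
\]

with the prior G unknown. The oracle Bayesian who knows G would report, at an observed value x, quantities such as

\[
\theta_G(x) = \mathbb{E}_G[\mu_i \mid X_i = x] \quad \text{(posterior mean)}, \qquad
\operatorname{lfsr}_G(x) = \min\bigl\{\mathbb{P}_G(\mu_i \le 0 \mid X_i = x),\ \mathbb{P}_G(\mu_i \ge 0 \mid X_i = x)\bigr\} \quad \text{(local false sign rate)},
\]

and the empirical Bayes goal is to cover these oracle targets using only the sample X_1, ..., X_n.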

The following papers are most relevant:

https://arxiv.org/abs/1902.02774 (forthcoming as a discussion paper in JASA T&M) and https://arxiv.org/abs/2004.09458 (the arXiv version will be updated in January). In the first paper, we operationalize results on the modulus of continuity and on minimax affine estimation of linear functionals to enable practical nonparametric inference in the empirical Bayes problem.
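
As a rough schematic of the affine-minimax ingredient (generic notation of my own, not the exact construction of the paper): writing f_G for the marginal density of an observation when the prior is G, and letting the functional of interest L(G) range over a convex class of priors \mathcal{G}, one studies a modulus of continuity of the form

\[
\omega(\delta) = \sup\bigl\{ L(G_1) - L(G_2) : G_1, G_2 \in \mathcal{G},\ d(f_{G_1}, f_{G_2}) \le \delta \bigr\},
\]

where d is a suitable divergence between marginal distributions. In the spirit of Donoho's theory of minimax affine estimation of linear functionals, the worst-case length of confidence intervals built from estimators that are affine in the data is governed by \omega evaluated at a sample-size-dependent radius, which is how the resulting intervals can retain coverage even when L(G) is only partially identified.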