Median Bias, HulC, and Valid Inference

Arun Kumar Kuchibhotla – Carnegie Mellon University


Abstract

Confidence intervals for functionals are an integral part of statistical inference. Traditional methods of constructing confidence intervals rely on studying the limiting distribution of an estimator of the functional and then estimating the unknown parameters of that limiting distribution. This approach underlies methods including Wald intervals, bootstrap intervals, and subsampling intervals (among many others). In this talk, I will argue that the median bias of an estimator (as opposed to its limiting distribution) is more central to the problem of performing inference. First, we introduce an inference methodology called the HulC that uses the convex hull of several independent estimators as a confidence interval. Validity of HulC intervals requires only control of the median bias of the estimator; hence, HulC is more widely applicable than Wald, bootstrap, and subsampling intervals. Second, we prove that (asymptotically) valid inference for a functional is possible if and only if there exists an estimator that is (asymptotically) median unbiased. The same holds true for uniformly valid (honest) inference. This leads to a new notion of regularity we call "median regularity," which is necessary for uniformly valid confidence intervals. The classical notion of regular estimators is not necessary for uniformly valid inference, while median regularity is both necessary and sufficient.
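
The abstract describes the HulC construction only at a high level. Below is a minimal, hypothetical Python sketch of the idea for a one-dimensional functional, assuming a median-unbiased estimator: if each batch estimate falls below or above the target with probability 1/2, then all B batch estimates land on the same side of the target with probability 2 · 2^(−B), so choosing B with 2 · 2^(−B) ≤ α makes the convex hull (here, the interval from the smallest to the largest batch estimate) a level 1 − α confidence set. The function name hulc_interval and the exponential-mean example are illustrative, not from the talk.

import numpy as np

def hulc_interval(data, estimator, alpha=0.05, rng=None):
    """Illustrative sketch of a basic HulC-style interval.

    Splits the data into B random batches, applies the estimator to each
    batch, and returns the convex hull (min, max) of the batch estimates.
    B is chosen so that 2 * (1/2)**B <= alpha, which bounds the chance
    that a median-unbiased estimator misses the target in every batch
    on the same side.
    """
    rng = np.random.default_rng(rng)
    n = len(data)
    B = int(np.ceil(np.log2(2.0 / alpha)))  # e.g. B = 6 when alpha = 0.05
    batches = np.array_split(rng.permutation(n), B)
    estimates = [estimator(data[b]) for b in batches]
    return min(estimates), max(estimates)

# Example (illustrative): a 95% interval for a population mean using the
# sample mean, which is asymptotically median unbiased in this setting.
x = np.random.default_rng(0).exponential(scale=2.0, size=500)
print(hulc_interval(x, np.mean, alpha=0.05))

This sketch covers only the simplest, exactly median-unbiased case; handling asymptotic or nonzero median bias requires adjusting the number of batches, which is part of what the talk addresses.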