Low-Priced Lunch in Conditional Independence Testing

Rajen Shah – University of Cambridge


Testing conditional independence lies at the heart of many statistical problems of interest, providing a formalism for model-free variable significance testing and playing a central role in causal inference, for example. We show, however, that testing the null hypothesis that X is independent of Y given Z is an impossible statistical task when Z is a continuous random variable: any test must have size at least as large as its power! The conclusion is that some modelling assumptions are necessary to further constrain the null of conditional independence.

Set against this negative result, we explain that the common t-test for assessing the significance of X for predicting Y given predictors Z has validity beyond the linear model setting for which it is designed: type I error is controlled under the null of conditional independence whenever a linear model of X on Z holds, regardless of the form of the regression model for Y on Z. Interestingly, the debiased Lasso, designed for the setting where Z is high-dimensional, needs only a small modification to enjoy this validity property whenever either the regression of X on Z or that of Y on Z follows a (in this case sparse) linear model.
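To illustrate the first of these robustness properties, a minimal simulation sketch (the particular nonlinear choice of E[Y|Z] and all variable names are our own, purely for illustration): the classical OLS t-test for the coefficient on X retains approximately nominal size even though Y depends on Z nonlinearly, because the linear model of X on Z holds.

```python
import numpy as np

def t_stat_last(W, y):
    """Classical OLS t-statistic for the coefficient on the last column of W."""
    n, p = W.shape
    beta, *_ = np.linalg.lstsq(W, y, rcond=None)
    resid = y - W @ beta
    sigma2 = resid @ resid / (n - p)          # usual residual variance estimate
    cov = sigma2 * np.linalg.inv(W.T @ W)     # classical covariance of beta-hat
    return beta[-1] / np.sqrt(cov[-1, -1])

rng = np.random.default_rng(1)
n, reps, rejections = 200, 500, 0
for _ in range(reps):
    Z = rng.normal(size=n)
    X = Z + rng.normal(size=n)                # linear model of X on Z holds
    Y = np.sin(2 * Z) + rng.normal(size=n)    # Y | Z is nonlinear; X is indep. of Y given Z
    W = np.column_stack([np.ones(n), Z, X])   # (misspecified) linear fit of Y on Z and X
    rejections += abs(t_stat_last(W, Y)) > 1.96
print(rejections / reps)  # empirical size; should be close to the nominal 5% level
```

Even though the working linear model for Y is wrong, the rejection rate stays near 5%, in line with the robustness described above.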

Looking beyond parametric assumptions on regression models, we describe a test for conditional independence whose validity relies primarily on the relatively weak requirement that the regressions of X on Z and of Y on Z estimate the relevant conditional expectations sufficiently well. While our general procedure can be tailored to the setting at hand by combining it with any regression technique, we develop theoretical guarantees for kernel ridge regression.
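One way such a regression-based test can be structured is sketched below (a simplified illustration, not the authors' exact procedure: we take a normalised covariance between the residuals of the two regressions as the test statistic, and plug in ordinary least squares where kernel ridge regression or any other method could be substituted).

```python
import numpy as np

def residual_test(X, Y, Z, regress):
    """Test X indep. of Y given Z via the normalised covariance of residuals.

    `regress(Z, target)` should return fitted values for E[target | Z];
    any regression technique can be plugged in here.
    """
    n = len(X)
    rx = X - regress(Z, X)                # residuals from regressing X on Z
    ry = Y - regress(Z, Y)                # residuals from regressing Y on Z
    R = rx * ry
    return np.sqrt(n) * R.mean() / R.std()  # approx. N(0, 1) under the null

def ols_fit(Z, target):
    """Plug-in regressor: least squares of target on (1, Z)."""
    Zb = np.column_stack([np.ones(len(Z)), Z])
    beta, *_ = np.linalg.lstsq(Zb, target, rcond=None)
    return Zb @ beta

rng = np.random.default_rng(0)
n = 500
Z = rng.normal(size=n)
X = Z + rng.normal(size=n)
Y_null = Z + rng.normal(size=n)           # X indep. of Y given Z holds
Y_alt = Z + X + rng.normal(size=n)        # X influences Y beyond Z

T0 = residual_test(X, Y_null, Z, ols_fit)  # moderate |T0|: no evidence against the null
T1 = residual_test(X, Y_alt, Z, ols_fit)   # large |T1|: dependence detected
```

The key point mirrors the abstract: the statistic is approximately standard normal under the null as long as both regressions estimate their conditional expectations well enough, whatever method produces them.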