Model-Free Selective Inference: From Calibrated Uncertainty to Trusted Decisions

YING JIN – STANFORD UNIVERSITY 

ABSTRACT

AI has shown great potential to accelerate decision-making and scientific discovery in domains such as drug discovery, marketing, and healthcare. In many applications, predictions from black-box models are used to shortlist candidates whose unknown outcomes satisfy a desired property, e.g., drugs with high binding affinities to a disease target. To ensure the reliability of high-stakes decisions, uncertainty quantification tools such as conformal prediction have been increasingly adopted to characterize the variability in black-box predictions. However, we find that the on-average guarantee of conformal prediction can be insufficient for deployment in decision-making, which usually has a selective nature.

In this talk, I will introduce a model-free selective inference framework for making reliable selections with the assistance of any black-box prediction model. Our framework identifies candidates whose unobserved outcomes exceed user-specified values while controlling the false discovery rate (FDR), i.e., the expected proportion of falsely selected units, without any modeling assumptions. Leveraging a set of exchangeable training data, our method constructs conformal p-values that quantify confidence in large outcomes; it then determines a data-dependent threshold for the p-values as a criterion for drawing confident decisions. In addition, I will discuss new ideas for handling covariate shift between the training data and new samples. We show that in several drug discovery tasks, our methods narrow the candidate pool down to a manageable set of promising drugs while controlling the proportion of false discoveries. In a causal inference dataset, our methods identify students who benefit from an educational intervention, providing new insights into causal effects.
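As a concrete illustration of this recipe (conformal p-values plus a data-dependent threshold), below is a minimal Python sketch of one standard instantiation, not necessarily the exact procedure presented in the talk. It assumes a fitted predictor, uses the monotone score V(x, y) = y - prediction(x), and applies the Benjamini-Hochberg step-up rule as the data-dependent p-value threshold; the function and variable names (conformal_select, pred_calib, pred_test) are illustrative.

    import numpy as np

    def conformal_select(pred_calib, y_calib, pred_test, c=0.0, alpha=0.1):
        """Select test units whose unobserved outcome plausibly exceeds c.

        Sketch of one standard instantiation: conformal p-values built
        from the monotone score V(x, y) = y - prediction(x), thresholded
        by the Benjamini-Hochberg (BH) procedure at target FDR alpha.
        """
        n = len(y_calib)
        # Calibration scores use observed outcomes; test scores plug in
        # the null boundary c, so a large prediction yields a small score
        # and hence a small p-value (strong evidence that Y > c).
        v_calib = np.asarray(y_calib) - np.asarray(pred_calib)
        v_test = c - np.asarray(pred_test)
        # Conservative conformal p-value: rank of each test score among
        # the exchangeable calibration scores.
        p = (1.0 + (v_calib[None, :] <= v_test[:, None]).sum(axis=1)) / (n + 1.0)
        # BH step-up rule: largest k with p_(k) <= alpha * k / m.
        m = len(p)
        order = np.argsort(p)
        below = p[order] <= alpha * np.arange(1, m + 1) / m
        k = int(np.nonzero(below)[0].max()) + 1 if below.any() else 0
        selected = np.zeros(m, dtype=bool)
        selected[order[:k]] = True
        return selected

For example, with pred_calib and pred_test holding a model's predictions on calibration and new units, and y_calib the observed calibration outcomes, the function returns a boolean mask of the test units selected at the target FDR level.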
