Arun Kumar Kuchibhotla
  • PhD Student

Contact Information

  • Office Address:

    450 Jon M. Huntsman Hall
    3730 Walnut Street
    Philadelphia, PA 19104

Overview

I completed my B.Stat (Bachelor's) and M.Stat (Master's) degrees in Statistics at the Indian Statistical Institute, Kolkata, from 2010 to 2015.

Webpage: http://kuchibhotlaarunkumar.wordpress.com

Research

I am currently working on wavelet-based nonparametric regression for random designs; the idea is to allow wavelet methods in the single index model. I am also interested in inference after selection of tuning parameters. Although closely related to post-selection inference, this topic differs in that the selection concerns an optimal tuning criterion rather than a model. My other interests include self-normalized processes and high-dimensional central limit theorems.
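
A minimal sketch of the single index model mentioned above (standard notation; the specific wavelet construction is ongoing work and is not reproduced here):

\[
  Y_i \;=\; m\bigl(\theta^{\top} X_i\bigr) + \varepsilon_i, \qquad i = 1, \dots, n,
\]

where $\theta$ is the unknown index parameter, $m$ is an unknown univariate link function, and $\varepsilon_i$ is mean-zero noise.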

  • Richard A. Berk, Andreas Buja, Lawrence D. Brown, Edward I. George, Arun Kumar Kuchibhotla, Weijie Su, Linda Zhao (2019), Assumption Lean Regression, The American Statistician (in press).

  • Arun Kumar Kuchibhotla and Ayanendranath Basu (Draft), A Minimum Distance Weighted Likelihood Method of Estimation.

  • Arun Kumar Kuchibhotla, Lawrence D. Brown, Andreas Buja (Working), Model-free Study of Ordinary Least Squares Linear Regression.

  • Arun Kumar Kuchibhotla (Draft), Deterministic Inequalities for Smooth M-estimators.

    Abstract: Ever since the proof of asymptotic normality of the maximum likelihood estimator by Cramér (1946), it has been understood that the basic technique of the Taylor series expansion suffices for the asymptotics of M-estimators with smooth/differentiable loss functions. Although the Taylor series expansion is a purely deterministic tool, the realization that the asymptotic normality results can also be made deterministic (and hence finite sample) has received far less attention. With the advent of big data and high-dimensional statistics, the need for finite sample results has increased. In this paper, we use the (well-known) Banach fixed point theorem to derive various deterministic inequalities that lead to the classical results when studied under randomness. In addition, we provide applications of these deterministic inequalities for cross-validation/subsampling, marginal screening and uniform-in-submodel results that are very useful for post-selection inference and in the study of post-regularization estimators. Our results apply to many classical estimators, in particular, generalized linear models, non-linear regression and the Cox proportional hazards model. Extensions to non-smooth and constrained problems are also discussed.

    Description: This paper proves inequalities for M-estimators that require no structure (such as independence or dependence) on the observations and hold for any sample size. When studied under a specific independence/dependence structure, the results recover the classical ones. The applications are very broad; a heuristic sketch of the underlying expansion is given below.
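
    A heuristic sketch of that expansion (schematic only; the paper's deterministic inequalities quantify the error in this approximation for any fixed data set):

    \[
      \hat{\theta} - \theta_0 \;\approx\; -\bigl[\nabla^2 \mathcal{L}_n(\theta_0)\bigr]^{-1}\,\nabla \mathcal{L}_n(\theta_0),
    \]

    where $\hat{\theta}$ minimizes a smooth empirical loss $\mathcal{L}_n$ and $\theta_0$ is the target parameter; under randomness assumptions, the right-hand side yields the classical asymptotic normality.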

  • Arun Kumar Kuchibhotla and Abhishek Chakrabortty (Under Review), Moving Beyond Sub-Gaussianity in High-Dimensional Statistics: Applications in Covariance Estimation and Linear Regression.

    Abstract: Concentration inequalities form an essential toolkit in the study of high-dimensional statistical methods. Most of the relevant statistics literature in this regard is, however, based on the assumptions of sub-Gaussian/sub-exponential random vectors. In this paper, we first bring together, via a unified exposition, various probability inequalities for sums of independent random variables under much weaker exponential-type (sub-Weibull) tail assumptions. These results extract a partial sub-Gaussian tail behavior of the sum in finite samples, matching the asymptotics governed by the central limit theorem, and are compactly represented in terms of a new Orlicz quasi-norm -- the Generalized Bernstein-Orlicz norm -- that typifies such tail behavior. We illustrate the usefulness of these inequalities through the analysis of four fundamental problems in high-dimensional statistics. In the first two problems, we study the rate of convergence of the sample covariance matrix in terms of the maximum elementwise norm and the maximum $k$-sub-matrix operator norm, which are key quantities of interest in bootstrap procedures and high-dimensional structured covariance matrix estimation. The third example concerns the restricted eigenvalue condition, required in high-dimensional linear regression, which we verify for all sub-Weibull random vectors under only marginal (not joint) tail assumptions on the covariates. To our knowledge, this is the first unified result obtained in such generality. In the final example, we consider the Lasso estimator for linear regression and establish its rate of convergence to be generally $\sqrt{k\log p/n}$, for $k$-sparse signals, under much weaker tail assumptions (on the errors as well as the covariates) than those in the existing literature. The common feature in all our results is that the convergence rates under most exponential tails match the usual ones obtained under sub-Gaussian assumptions. Finally, we also establish a high-dimensional central limit theorem with a concrete rate bound for sub-Weibull random vectors, as well as tail bounds for suprema of empirical processes. All our results are finite sample.
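
    For concreteness, the sub-Weibull tail condition above is typically stated through the Orlicz (quasi-)norm below (this is the standard definition; the paper's Generalized Bernstein-Orlicz norm is a refinement and is not reproduced here):

    \[
      \|X\|_{\psi_\alpha} \;=\; \inf\Bigl\{ C > 0 \;:\; \mathbb{E}\exp\bigl(|X|^{\alpha}/C^{\alpha}\bigr) \le 2 \Bigr\}, \qquad \alpha > 0,
    \]

    so that $\alpha = 2$ and $\alpha = 1$ recover sub-Gaussian and sub-exponential random variables, and a random variable is sub-Weibull of order $\alpha$ whenever this norm is finite.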

  • Arun Kumar Kuchibhotla, Lawrence D. Brown, Andreas Buja, Edward I. George, Linda Zhao (Working), A Model Free Perspective for Linear Regression: Uniform-in-model Bounds for Post Selection Inference.

    Abstract: For the last two decades, high-dimensional data and methods have proliferated throughout the literature. The classical technique of linear regression, however, has not lost its relevance in applications. Most high-dimensional estimation techniques can be seen as variable selection tools which lead to a smaller set of variables to which the classical linear regression technique applies. In this paper, we prove estimation error and linear representation bounds for the linear regression estimator uniformly over (many) subsets of variables. Based on deterministic inequalities, our results provide “good” rates when applied to both independent and dependent data. These results are useful in correctly interpreting the linear regression estimator obtained after exploring the data and also in post model-selection inference. All the results are derived under no model assumptions and are non-asymptotic in nature.
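
    To fix notation (a schematic statement, not the paper's exact results), the uniformity is over least squares estimators computed on subsets of covariates:

    \[
      \hat{\beta}_M \;=\; \operatorname*{arg\,min}_{\beta} \; \frac{1}{n}\sum_{i=1}^{n}\bigl(Y_i - X_{i,M}^{\top}\beta\bigr)^2, \qquad M \subseteq \{1, \dots, p\},
    \]

    where $X_{i,M}$ collects the covariates in the subset $M$; the bounds control quantities such as $\max_{M}\|\hat{\beta}_M - \beta_M\|$, with $\beta_M$ the corresponding population projection parameter, without assuming that any of the submodels is correctly specified.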

  • Debapratim Banerjee, Arun Kumar Kuchibhotla, Somabha Mukherjee (Work In Progress), Cramér-type Large Deviations and Non-uniform Central Limit Theorems in High Dimensions.

    Abstract: Central limit theorems (CLTs) for high-dimensional random vectors with dimension possibly growing with the sample size have received a lot of attention in recent times. Chernozhukov et al. (2017) proved a Berry--Esseen type result for high-dimensional averages over the class of hyperrectangles, showing that the rate of convergence can be upper bounded by $n^{-1/6}$ up to a polynomial factor of $\log p$ (where $n$ represents the sample size and $p$ denotes the dimension). In the classical literature on the central limit theorem, various non-uniform extensions of the Berry--Esseen bound are available. Similar extensions, however, have not appeared in the context of the high-dimensional CLT. This is the main focus of our paper. Based on the classical large deviation and non-uniform CLT results for random variables in a Banach space by Bentkus, Rackauskas, and Paulauskas, we prove three non-uniform variants of the high-dimensional CLT. In addition, we prove a dimension-free anti-concentration inequality for the absolute supremum of a Gaussian process on a compact metric space.
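
    For context (stated schematically, with constants and exact moment conditions omitted), the Berry--Esseen type bound of Chernozhukov et al. (2017) referred to above takes the form

    \[
      \sup_{A \in \mathcal{A}} \; \Bigl| \mathbb{P}\Bigl(n^{-1/2}\textstyle\sum_{i=1}^{n} X_i \in A\Bigr) - \mathbb{P}(Z \in A) \Bigr| \;\le\; C\,\frac{(\log (pn))^{c}}{n^{1/6}},
    \]

    where $\mathcal{A}$ is the class of hyperrectangles in $\mathbb{R}^p$, $Z$ is a Gaussian vector with matching covariance, and $c$, $C$ are constants; the non-uniform variants in this paper allow the bound to depend on the location of the set $A$, in the spirit of classical non-uniform Berry--Esseen bounds.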

  • Arun Kumar Kuchibhotla, Lawrence D. Brown, Andreas Buja, Richard A. Berk, Linda Zhao, Edward I. George (Working), Valid Post-selection Inference in Assumption-lean Linear Regression.

    Abstract: This paper provides multiple approaches to perform valid post-selection inference in an assumption-lean regression analysis. To the best of our knowledge, this is the first work that provides valid post-selection inference for regression analysis in such a general setting, which includes independent as well as m-dependent random variables.
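
    One standard way to formalize the validity requirement (a generic statement of the post-selection coverage guarantee, not necessarily the exact formulation used in the paper) is

    \[
      \mathbb{P}\bigl(\beta_{\hat{M}} \in \hat{C}_{\hat{M}}\bigr) \;\ge\; 1 - \alpha \quad \text{for every data-dependent selection rule } \hat{M},
    \]

    which is implied by the simultaneous guarantee $\mathbb{P}\bigl(\beta_M \in \hat{C}_M \text{ for all candidate models } M\bigr) \ge 1 - \alpha$.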

  • Arun Kumar Kuchibhotla, Rohit Kumar Patra, Bodhisattva Sen (Draft), Efficient Estimation in Convex Single Index Models.

    Abstract: We consider estimation and inference in a single index regression model with an unknown convex link function. We propose two estimators for the unknown link function: (1) a Lipschitz constrained least squares estimator and (2) a shape-constrained smoothing spline estimator. Moreover, both of these procedures lead to estimators for the unknown finite dimensional parameter. We develop methods to compute both the Lipschitz constrained least squares estimator (LLSE) and the penalized least squares estimator (PLSE) of the parametric and the nonparametric components given independent and identically distributed (i.i.d.) data. We prove consistency and find the rates of convergence for both the LLSE and the PLSE. For both estimators, we establish a root-n rate of convergence and semiparametric efficiency of the parametric component under mild assumptions. Moreover, both the LLSE and the PLSE readily yield asymptotic confidence sets for the finite dimensional parameter. We develop the R package "simest" to compute the proposed estimators. Our proposed algorithm works even when n is modest and d is large (e.g., n = 500 and d = 100).

    Description: Authors listed in alphabetical order.
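
    Schematically, writing the model in the abstract above as $Y_i = m(\theta^{\top}X_i) + \varepsilon_i$ with $m$ convex, the Lipschitz constrained least squares estimator solves (a sketch; the paper's formulation includes further details)

    \[
      (\hat{m}, \hat{\theta}) \;\in\; \operatorname*{arg\,min}_{m \in \mathcal{M}_L,\; \theta} \; \frac{1}{n}\sum_{i=1}^{n}\bigl(Y_i - m(\theta^{\top}X_i)\bigr)^2,
    \]

    where $\mathcal{M}_L$ is the class of convex functions with Lipschitz constant at most $L$ and $\theta$ ranges over an identifiability-normalized parameter set; the PLSE replaces the Lipschitz constraint by a smoothness penalty.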

  • Arun Kumar Kuchibhotla, Somabha Mukherjee, Ayanendranath Basu (Draft), Statistical Inference based on Bridge Divergences.

    Description: M-estimators offer simple robust alternatives to the maximum likelihood estimator. Much of the robustness literature, however, has focused on the problems of location, location-scale and regression estimation rather than on estimation of general parameters. The density power divergence (DPD) and the logarithmic density power divergence (LDPD) measures provide two classes of competitive M-estimators (obtained from divergences) in general parametric models which contain the MLE as a special case. In each of these families, the robustness of the estimator is achieved through a density power down-weighting of outlying observations. Both the families have proved to be very useful tools in the area of robust inference. However, the relation and hierarchy between the minimum distance estimators of the two families are yet to be comprehensively studied or fully established. Given a particular set of real data, how does one choose an optimal member from the union of these two classes of divergences? In this paper, we present a generalized family of divergences incorporating the above two classes; this family provides a smooth bridge between the DPD and the LDPD measures. This family helps to clarify and settle several longstanding issues in the relation between the important families of DPD and LDPD, apart from being an important tool in different areas of statistical inference in its own right.
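
    For reference, the density power divergence between a data density $g$ and a model density $f_{\theta}$, with tuning parameter $\alpha > 0$, has the standard form (the bridge family itself is not reproduced here):

    \[
      d_{\alpha}(g, f_{\theta}) \;=\; \int f_{\theta}^{1+\alpha} \;-\; \Bigl(1 + \tfrac{1}{\alpha}\Bigr)\int g\, f_{\theta}^{\alpha} \;+\; \tfrac{1}{\alpha}\int g^{1+\alpha},
    \]

    so that the resulting estimating equation weights each observation's score contribution by $f_{\theta}^{\alpha}$ evaluated at that observation -- the density power down-weighting referred to above -- and the minimum divergence estimator reduces to the MLE as $\alpha \to 0$.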

Teaching

Past Courses

  • STAT111 - INTRODUCTORY STATISTICS

    Introduction to concepts in probability. Basic statistical inference procedures of estimation, confidence intervals, and hypothesis testing, directed towards applications in science and medicine. The course uses the JMP statistical package.

Awards and Honors

  • ISI Jan Tinbergen Award, 2015

    The Jan Tinbergen Awards, named after the famous Dutch econometrician, are biennial awards to young statisticians from developing countries for the best papers on any topic within the broad field of statistics. Up to three awards are made for each WSC (World Statistics Congress). The award was presented at the ISI WSC 2015 held in Rio de Janeiro, Brazil.