# Andreas Buja

## Contact Information

## Overview

## Education

## Academic Positions Held

## Other Positions

## Professional Leadership

## Research

## Teaching

## Current Courses

## Past Courses

## Awards and Honors

## In the News

## Knowledge @ Wharton

## Activity

### Latest Research

### In the News

Different Worlds: Do Recommender Systems Fragment Consumers’ Interests?

### Awards and Honors

- The Liem Sioe Liong/First Pacific Company Professor
- Professor of Statistics

**Primary Email:**

buja.at.wharton@gmail.com

**Office Address:** 471 Jon M. Huntsman Hall

3730 Walnut Street

Philadelphia, PA 19104

**Research Interests:** data visualization, multivariate statistics, nonparametric statistics

**Links:**
CV

PhD, Swiss Federal Institute of Technology (ETHZ), 1980

Wharton: 2002-present (named Liem Sioe Liong/First Pacific Company Professor, 2003).

Previous appointment: University of Washington, Seattle. Visiting appointment: Stanford University.

Member, Technical Staff, Bellcore/Telcordia, 1987-94

Member, Technical Staff, AT&T Bell Labs, 1994-96

Technology Consultant, AT&T Labs, 1996-2001

Editor, Journal of Computational and Graphical Statistics, 1997-2001

Advisory Editor, Journal of Computational and Graphical Statistics, 2001-present

For more information, go to My Personal Page

Andreas Buja and Wolfgang Rolke (Work In Progress),

**Calibration for Simultaneity: (Re)sampling Methods for Simultaneous Inference with Applications to Function Estimation and Functional Data**.

**Abstract:** We survey and illustrate a Monte Carlo technique for carrying out simple simultaneous inference with arbitrarily many statistics. Special cases of the technique have appeared in the literature, but there exists widespread unawareness of the simplicity and broad applicability of this solution to simultaneous inference. The technique, here called “calibration for simultaneity” or CfS, consists of 1) limiting the search for coverage regions to a one-parameter family of nested regions, and 2) selecting from the family that region whose estimated coverage probability has the desired value. Natural one-parameter families are almost always available. CfS applies whenever inference is based on a single distribution, for example: 1) fixed distributions such as Gaussians when diagnosing distributional assumptions, 2) conditional null distributions in exact tests with Neyman structure, in particular permutation tests, 3) bootstrap distributions for bootstrap standard error bands, 4) Bayesian posterior distributions for high-dimensional posterior probability regions, or 5) predictive distributions for multiple prediction intervals. CfS is particularly useful for estimation of any type of function, such as empirical Q-Q curves, empirical CDFs, density estimates, smooths, generally any type of fit, and functions estimated from functional data. A special case of CfS is equivalent to p-value adjustment (Westfall and Young, 1993). Conversely, the notion of a p-value can be extended to any simultaneous coverage problem that is solved with a one-parameter family of coverage regions.

Andreas Buja, Natalia Volfovsky, Abba M. Krieger, Catherine Lord, Michael Wigler, Ivan Iossifov (2018),

**Damaging De Novo Mutations Diminish Motor Skills in Children on the Autism Spectrum**, *Proceedings of the National Academy of Sciences of the United States of America*, (forthcoming).

Daniel McCarthy, Kai Zhang, Lawrence D. Brown, Richard A. Berk, Andreas Buja, Edward I. George, Linda Zhao (2018),

**Calibrated Percentile Double Bootstrap For Robust Linear Regression Inference**, *Statistica Sinica*, (in press).

Richard A. Berk, Lawrence D. Brown, Andreas Buja, Edward I. George, Linda Zhao (2018),

**Working with Misspecified Regression Models**, *Journal of Quantitative Criminology*, (in press).

Arun Kumar Kuchibhotla, Lawrence D. Brown, Andreas Buja, Edward I. George, Linda Zhao (Working),

**A Model Free Perspective for Linear Regression: Uniform-in-model Bounds for Post Selection Inference**.

**Abstract:** For the last two decades, high-dimensional data and methods have proliferated throughout the literature. The classical technique of linear regression, however, has not lost its touch in applications. Most high-dimensional estimation techniques can be seen as variable selection tools which lead to a smaller set of variables where the classical linear regression technique applies. In this paper, we prove estimation error and linear representation bounds for the linear regression estimator uniformly over (many) subsets of variables. Based on deterministic inequalities, our results provide “good” rates when applied to both independent and dependent data. These results are useful in correctly interpreting the linear regression estimator obtained after exploring the data and also in post model-selection inference. All the results are derived under no model assumptions and are non-asymptotic in nature.

Arun Kumar Kuchibhotla, Lawrence D. Brown, Andreas Buja, Richard A. Berk, Linda Zhao, Edward I. George (Working),

**Valid Post-selection Inference in Assumption-lean Linear Regression**.

**Abstract:** This paper provides multiple approaches to performing valid post-selection inference in an assumption-lean regression analysis. To the best of our knowledge, this is the first work that provides valid post-selection inference for regression analysis in settings general enough to include independent and m-dependent random variables.

Nermin Eyuboglu, Sertan Kabadayi, Andreas Buja (2017),

**Multiple Channel Complexity: Conceptualization and Measurement**, *Industrial Marketing Management*, 65, pp. 194-205.

Kenny Ye, Ivan Iossifov, Dan Levy, Boris Yamrom, Andreas Buja, Abba M. Krieger, Michael Wigler (2017),

**Measuring Shared Variants in Cohorts of Discordant Siblings with Applications to Autism**, *Proceedings of the National Academy of Sciences of the United States of America*, 114 (27), pp. 7073-7076.

Andreas Buja, Richard A. Berk, Lawrence D. Brown, Edward I. George, Arun Kumar Kuchibhotla, Linda Zhao (2016),

**Models as Approximations, Part II: A General Theory of Model-Robust Regression**, *Statistical Science*, (submitted).

Andreas Buja, Richard A. Berk, Lawrence D. Brown, Edward I. George, Emil Pitkin, Mikhail Traskin, Linda Zhao, Kai Zhang (2016),

**Models as Approximations, Part I: A Conspiracy of Nonlinearity and Random Regressors in Linear Regression**, *Statistical Science*, (revision submitted).
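The “calibration for simultaneity” (CfS) recipe described in the first abstract above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the authors' code: it assumes a one-parameter family of nested bands of the form center ± c·scale and Monte Carlo draws stored as rows of an array.

```python
# Hypothetical sketch of "calibration for simultaneity" (CfS): restrict to a
# one-parameter family of nested bands, center +/- c * scale, and pick the c
# whose estimated simultaneous coverage over the Monte Carlo draws is 1 - alpha.
import numpy as np

def cfs_band(draws, alpha=0.05):
    """draws: (n_draws, n_points) array of Monte Carlo draws of a statistic.
    Returns lower and upper bands with ~(1 - alpha) simultaneous coverage."""
    center = draws.mean(axis=0)
    scale = draws.std(axis=0, ddof=1)
    # Smallest c that would contain each draw at every point simultaneously:
    c_per_draw = np.max(np.abs(draws - center) / scale, axis=1)
    # Calibration step: the (1 - alpha) quantile of these c values gives a
    # band that contains about 95% of the draws in their entirety.
    c = np.quantile(c_per_draw, 1 - alpha)
    return center - c * scale, center + c * scale

# Toy example: 1000 draws of a 20-dimensional statistic.
rng = np.random.default_rng(0)
draws = rng.normal(size=(1000, 20))
lo, hi = cfs_band(draws)
```

Pointwise ±1.96·scale bands would cover each coordinate separately at 95% but fail simultaneously; the calibrated c is larger precisely to buy simultaneity.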

### STAT470 - Data Analytics And Statistical Computing

This course introduces R, a high-level programming language widely used for statistical data analysis. Using R, we will study and practice the following methodologies: data cleaning and feature extraction; web scraping and text analysis; data visualization; fitting statistical models; simulation of probability distributions and statistical models; and statistical inference methods that use simulations (bootstrap, permutation tests).

STAT470401 (Syllabus)

STAT470402 (Syllabus)
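As one illustration of the simulation-based inference methods listed in the course description, a two-sample permutation test takes only a few lines. The course itself works in R; the sketch below uses Python and made-up data purely for illustration.

```python
# A minimal two-sample permutation test for a difference in means, one of the
# simulation-based inference methods mentioned above. Illustrative only.
import numpy as np

rng = np.random.default_rng(42)

def permutation_pvalue(a, b, n_perm=5000):
    """Two-sided p-value for H0: both samples come from one distribution."""
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)   # relabel the observations under H0
        stat = abs(perm[:a.size].mean() - perm[a.size:].mean())
        count += stat >= observed
    return (count + 1) / (n_perm + 1)    # add-one rule avoids p = 0

# Made-up data with a clear shift in means; p should be small.
a = rng.normal(0.0, 1.0, size=50)
b = rng.normal(1.0, 1.0, size=50)
p = permutation_pvalue(a, b)
```

The test is exact under the null because relabeling the pooled observations leaves their joint distribution unchanged, so no parametric assumptions are needed.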

### STAT503 - Data Analytics And Statistical Computing

This course introduces R, a high-level programming language widely used for statistical data analysis. Using R, we will study and practice the following methodologies: data cleaning and feature extraction; web scraping and text analysis; data visualization; fitting statistical models; simulation of probability distributions and statistical models; and statistical inference methods that use simulations (bootstrap, permutation tests).

STAT503401 (Syllabus)

STAT503402 (Syllabus)

### STAT770 - Data Analytics And Statistical Computing

This course introduces R, a high-level programming language widely used for statistical data analysis. Using R, we will study and practice the following methodologies: data cleaning and feature extraction; web scraping and text analysis; data visualization; fitting statistical models; simulation of probability distributions and statistical models; and statistical inference methods that use simulations (bootstrap, permutation tests).

STAT770401 (Syllabus)

STAT770402 (Syllabus)

### STAT961 - Statistical Methodology

This course prepares first-year PhD students in statistics for a research career; it is not an applied statistics course. Topics covered include: linear models and their high-dimensional geometry, statistical inference illustrated with linear models, diagnostics for linear models, bootstrap and permutation inference, principal component analysis, smoothing and cross-validation.

STAT961001

### STAT101 - Introductory Business Statistics

Data summaries and descriptive statistics; introduction to a statistical computer package; probability: distributions, expectation, variance, covariance, portfolios, central limit theorem; statistical inference for univariate data; statistical inference for bivariate data: inference for intrinsically linear simple regression models. The course has a business focus but is also suitable for students in the College.

### STAT102 - Introductory Business Statistics

Continuation of STAT 101. A thorough treatment of multiple regression, model selection, analysis of variance, linear logistic regression; introduction to time series. Business applications.

### STAT470 - Data Analytics and Statistical Computing

### STAT503 - Data Analytics and Statistical Computing

### STAT621 - Accelerated Regression Analysis for Business

STAT 621 is intended for students with recent, practical knowledge of the use of regression analysis in the context of business applications. This course covers the material of STAT 613, but omits the foundations to focus on regression modeling. The course reviews statistical hypothesis testing and confidence intervals for the sake of standardizing terminology and introducing software, and then moves into regression modeling. The pace presumes recent exposure to both the theory and practice of regression and will not be accommodating to students who have not seen or used these methods previously. The interpretation of regression models within the context of applications will be stressed, presuming knowledge of the underlying assumptions and derivations. The scope of regression modeling that is covered includes multiple regression analysis with categorical effects, regression diagnostic procedures, interactions, and time series structure. The presentation of the course relies on computer software that will be introduced in the initial lectures.

### STAT770 - Data Analytics and Statistical Computing

### STAT926 - Multivariate Analysis: Methodology

This course prepares PhD students in statistics for research in multivariate statistics and data visualization. The emphasis is on a deep conceptual understanding of multivariate methods, to the point where students can propose variations and extensions of existing methods, or whole new approaches to problems previously solved by classical methods. Topics include: principal component analysis, canonical correlation analysis, generalized canonical analysis; nonlinear extensions of multivariate methods based on optimal transformations of quantitative variables and optimal scaling of categorical variables; shrinkage- and sparsity-based extensions to classical methods; clustering methods of the k-means and hierarchical varieties; multidimensional scaling, graph drawing, and manifold estimation.
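The first topic in that list, principal component analysis, can be illustrated in a few lines via the singular value decomposition of the centered data matrix. This is a hypothetical Python sketch on synthetic data, not course material.

```python
# Principal component analysis via the SVD of the centered data matrix.
# Synthetic-data sketch for illustration only.
import numpy as np

rng = np.random.default_rng(1)

# 500 observations in 2 dimensions, with most variance along direction (3, 1).
scores = rng.normal(size=500)
X = np.outer(scores, [3.0, 1.0]) + 0.1 * rng.normal(size=(500, 2))

Xc = X - X.mean(axis=0)                      # center each column
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)              # proportion of variance per PC
pc_scores = Xc @ Vt.T                        # data in the principal-axis basis
```

The rows of `Vt` are the principal directions; with this construction the first component should account for nearly all of the variance.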

### STAT961 - Statistical Methodology

This course prepares first-year PhD students in statistics for a research career; it is not an applied statistics course. Topics covered include: linear models and their high-dimensional geometry, statistical inference illustrated with linear models, diagnostics for linear models, bootstrap and permutation inference, principal component analysis, smoothing and cross-validation.

### STAT995 - Dissertation

### STAT999 - Independent Study

- Keynote speaker, Classification Society Conference, Milwaukee, WI, USA, 2013
- InfoVis Best Paper Award for the article “Graphical inference for infovis” by Wickham, H., Cook, D., Hofmann, H., and Buja, A., IEEE Transactions on Visualization and Computer Graphics (Proc. InfoVis ’10), 2010
- Journal of Marketing, finalist for the Harold H. Maynard Award and featured blog article of the October Issue, 2007
- Keynote speaker, SIAM Conference on Data Mining (SDM06), Bethesda, MD, USA, 2006
- Fellow, Institute of Mathematical Statistics, 2006
- IMS Medallion lecture, Joint Statistical Meetings, New York, 2002
- Keynote speaker, European Meeting of the Psychometric Society, Leiden, 1995
- Fellow, American Statistical Association, 1994
- Award Medal for diploma thesis in mathematics, Swiss Federal Institute of Technology, 1975

- Different Worlds: Do Recommender Systems Fragment Consumers’ Interests?, Knowledge @ Wharton - 08/31/2011


The rise of computer-driven recommendation systems designed to help consumers navigate a growing ocean of choice is prompting concerns that the hyperpersonalization of information sources will lead to harmful divisions throughout society. A new study on consumer purchasing patterns in the music industry suggests the opposite. The paper, by Wharton researchers Kartik Hosanagar, Andreas Buja and Daniel M. Fleder, is titled *“Will the Global Village Fracture into Tribes: Recommender Systems and their Effects on the Consumer.”*
