465 Jon M. Huntsman Hall
3730 Walnut Street
Philadelphia, PA 19104
Research Interests: Statistics and machine learning
The two main interests in my group are:
The group is always looking to expand. We are recruiting PhD students at Penn to work on problems in statistics and machine learning. PhD applicants interested in working with me should mention this in their application. Please apply through both the Statistics department and the AMCS program, as this increases the chances of admission.
Seminar class in Fall 2019: Topics in Deep Learning (STAT-991), surveying advanced topics in deep learning research through student presentations. See the GitHub page for the class materials.
Yinjun Wu, Edgar Dobriban, Susan Davidson (2020), DeltaGrad: Rapid retraining of machine learning models, International Conference on Machine Learning (ICML) 2020.
Fan Yang, Sifan Liu, Edgar Dobriban, David P. Woodruff, How to reduce dimension with PCA and random projections?
Sifan Liu and Edgar Dobriban (2020), Ridge regression: Structure, cross-validation, and sketching, International Conference on Learning Representations (ICLR).
Alnur Ali, Edgar Dobriban, Ryan J. Tibshirani (2020), The implicit regularization of stochastic gradient flow for least squares, International Conference on Machine Learning (ICML) 2020.
Jonathan Lacotte, Sifan Liu, Edgar Dobriban, Mert Pilanci (Working paper), Limiting spectrum of randomized Hadamard transform and optimal iterative sketching methods.
Edgar Dobriban and Yue Sheng (2020), WONDER: Weighted one-shot distributed ridge regression in high dimensions, Journal of Machine Learning Research (JMLR), 52.
Description: To appear in Biometrika.
Edgar Dobriban, William Leeb, Amit Singer (2020), Optimal prediction in the linearly transformed spiked model, The Annals of Statistics, 48 (1), pp. 491-513.
Description: This paper supersedes the older Dobriban, Leeb, Singer manuscript "PCA from noisy, linearly reduced data: the diagonal case".
This page links to methods from my papers. Feel free to contact me if you are interested in using them.
The ePCA method for principal component analysis of exponential-family data, e.g. Poisson-modeled count data (with L.T. Liu);
Methods for working with large random data matrices, including
P-value weighting techniques for multiple hypothesis testing. These can improve power in multiple testing when prior information about the individual effect sizes is available. Includes the iGWAS method for Genome-Wide Association Studies.
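To illustrate the idea behind PCA for Poisson-modeled counts, here is a minimal NumPy sketch of diagonal debiasing: for Poisson observations the conditional variance equals the conditional mean, so subtracting diag(mean) from the sample covariance reduces the noise contribution before extracting principal components. This is only a simplified illustration of the starting point, not the full ePCA method, which includes additional correction and shrinkage steps; all variable names here are for the example only.

```python
import numpy as np

# Simulate low-rank latent structure observed through Poisson sampling.
rng = np.random.default_rng(0)
n, p = 500, 20
latent = rng.normal(size=(n, 2)) @ rng.normal(size=(2, p))  # rank-2 signal
rates = np.exp(0.1 * latent + 1.0)                          # positive Poisson rates
Y = rng.poisson(rates)

# Sample covariance overestimates the latent covariance by roughly diag(mean),
# since Var(Y | rate) = E[Y | rate] for Poisson data.
mean_hat = Y.mean(axis=0)
S = np.cov(Y, rowvar=False)
S_debiased = S - np.diag(mean_hat)  # remove the Poisson noise variance on the diagonal

# Principal components of the debiased covariance.
eigvals, eigvecs = np.linalg.eigh(S_debiased)
top_pc = eigvecs[:, -1]  # leading eigenvector
```

The debiased covariance is no longer guaranteed to be positive semidefinite, which is one reason the full method goes beyond this simple subtraction.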
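As a small illustration of p-value weighting (a generic weighted Bonferroni rule, not the iGWAS method itself), the sketch below assumes nonnegative weights averaging to one; rejecting when p_i <= w_i * alpha / m still controls the family-wise error rate at level alpha, and weights informed by prior effect-size knowledge can increase power. The function name and data are hypothetical.

```python
import numpy as np

def weighted_bonferroni(pvals, weights, alpha=0.05):
    """Weighted Bonferroni: reject test i when p_i <= w_i * alpha / m.

    Weights must be nonnegative and average to 1, which preserves
    family-wise error rate control at level alpha.
    """
    pvals = np.asarray(pvals, dtype=float)
    weights = np.asarray(weights, dtype=float)
    m = len(pvals)
    assert np.isclose(weights.mean(), 1.0), "weights must average to 1"
    return pvals <= weights * alpha / m

# Example: upweight the tests believed a priori to carry larger effects.
pvals = np.array([0.001, 0.02, 0.3, 0.04])
weights = np.array([2.0, 1.0, 0.5, 0.5])
rejections = weighted_bonferroni(pvals, weights)  # array([ True, False, False, False])
```

With uniform weights this reduces to the ordinary Bonferroni correction; the gain comes entirely from concentrating the weight budget on hypotheses with larger anticipated effects.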