305 Academic Research Building
265 South 37th Street
Philadelphia, PA 19104
Research Interests: Statistics and machine learning
Postdoctoral research fellow positions are available on the following topics. Please see the ads and send e-mail to Edgar:
The group is always looking to expand. We are recruiting PhD students at Penn to work on problems in statistics and machine learning. PhD applicants interested in working with me should mention this in their application. Please apply through both the Statistics department and the AMCS program, as this increases the chances of admission.
Seminar class in Fall 2019: Topics in Deep Learning (STAT-991), surveying advanced topics in deep learning research through student presentations. See the GitHub page for the class materials.
Lingjiao Chen, Leshang Chen, Hongyi Wang, Susan Davidson, Edgar Dobriban, Solon: Communication-efficient Byzantine-resilient Distributed Training via Redundant Gradients.
Dominic Richards, Edgar Dobriban, Patrick Rebeschini, Comparing Classes of Estimators: When does Gradient Descent Beat Ridge Regression in Linear Models?.
Zitong Yang, Yaodong Yu, Edgar Dobriban, Jacob Steinhardt, Yi Ma, Understanding Generalization in Adversarial Training via the Bias-Variance Decomposition.
David Hong, Rounak Dey, Xihong Lin, Brian Cleary, Edgar Dobriban, HYPER: Group testing via hypergraph factorization applied to COVID-19.
Xiaoxia Wu, Edgar Dobriban, Tongzheng Ren, Shanshan Wu, Zhiyuan Li, Suriya Gunasekar, Rachel Ward, Qiang Liu (2020), Implicit regularization of normalization methods, Neural Information Processing Systems (NeurIPS) 2020.
Jonathan Lacotte, Sifan Liu, Edgar Dobriban, Mert Pilanci (2020), Limiting spectrum of randomized Hadamard transform and optimal iterative sketching methods, Neural Information Processing Systems (NeurIPS) 2020.
This page has links to methods from my papers. Feel free to contact me if you are interested in using them.
The ePCA method for principal component analysis of exponential-family data, e.g., Poisson-modeled count data (with L.T. Liu).
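To illustrate the core idea behind debiased PCA for count data, here is a minimal sketch (not the full ePCA implementation): for Poisson observations the variance equals the mean, so the sample covariance mixes the signal covariance with a diagonal noise term; subtracting the diagonal of the column means debiases it before the eigendecomposition. The function name and interface below are illustrative assumptions.

```python
import numpy as np

def debiased_poisson_pca(Y, k):
    """Sketch of debiased PCA for Poisson counts (ePCA-style idea).

    Y : (n, p) array of counts; k : number of components to return.
    Since Var(Y_ij) = E[Y_ij] under a Poisson model, subtracting
    diag(column means) removes the noise contribution on the diagonal
    of the sample covariance.
    """
    Y = np.asarray(Y, dtype=float)
    mu = Y.mean(axis=0)                     # column means, shape (p,)
    S = np.cov(Y, rowvar=False)             # (p, p) sample covariance
    S_debiased = S - np.diag(mu)            # remove Poisson noise term
    vals, vecs = np.linalg.eigh(S_debiased) # ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]      # take top-k eigenpairs
    return vals[order], vecs[:, order]
```

The returned eigenvectors estimate the principal components of the underlying (noise-free) intensity matrix rather than of the raw counts.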
Methods for working with large random data matrices, including
P-value weighting techniques for multiple hypothesis testing. These can improve power in multiple testing when prior information about the individual effect sizes is available. Includes the iGWAS method for Genome-Wide Association Studies.
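As a minimal sketch of the general p-value weighting idea (weighted Bonferroni, not the iGWAS method itself): each hypothesis gets a weight reflecting prior information, the weights are normalized to average one, and hypothesis i is rejected when p_i falls below its weighted share of the significance budget. The function name below is an illustrative assumption.

```python
import numpy as np

def weighted_bonferroni(pvals, weights, alpha=0.05):
    """Weighted Bonferroni: reject H_i when p_i <= w_i * alpha / m.

    Weights are normalized to have mean 1, which preserves family-wise
    error rate control at level alpha while shifting power toward
    hypotheses believed a priori to have larger effects.
    """
    pvals = np.asarray(pvals, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.mean()                 # normalize so the mean weight is 1
    m = len(pvals)
    return pvals <= w * alpha / m    # boolean array of rejections
```

For example, with p-values [0.001, 0.2, 0.04] and weights [2, 0.5, 0.5] at alpha = 0.05, only the first hypothesis (which has both a small p-value and a large prior weight) is rejected.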