Gen Li, Wei Fan, Yuting Wei (Under Revision), Approximate message passing from random initialization with applications to Z2 synchronization.
Gen Li and Yuting Wei (Under Revision), A non-asymptotic framework for approximate message passing in spiked models.
Gen Li, Yuejie Chi, Yuting Wei, Yuxin Chen, Minimax-Optimal Multi-Agent RL in Zero-Sum Markov Games With a Generative Model.
Yuling Yan, Gen Li, Yuxin Chen, Jianqing Fan (Under Review), Model-Based Reinforcement Learning Is Minimax-Optimal for Offline Zero-Sum Markov Games.
Gen Li, Laixi Shi, Yuxin Chen, Yuejie Chi, Yuting Wei (Under Review), Settling the Sample Complexity of Model-Based Offline Reinforcement Learning.
Laixi Shi, Gen Li, Yuting Wei, Yuxin Chen, Yuejie Chi (2022), Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity, International Conference on Machine Learning (ICML).
Changxiao Cai, Gen Li, H. Vincent Poor, Yuxin Chen (2022), Nonconvex Low-Rank Tensor Completion from Noisy Data, Operations Research, 70 (2), pp. 1219-1237.
Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen (2022), Sample complexity of asynchronous Q-learning: sharper analysis and variance reduction, IEEE Transactions on Information Theory, 68 (1), pp. 448-473.
Gen Li, Yuxin Chen, Yuejie Chi, Yuantao Gu, Yuting Wei (2021), Sample-Efficient Reinforcement Learning Is Feasible for Linearly Realizable MDPs with Limited Revisiting, Neural Information Processing Systems (NeurIPS).
Gen Li, Laixi Shi, Yuxin Chen, Yuantao Gu, Yuejie Chi (2021), Breaking the Sample Complexity Barrier to Regret-Optimal Model-free Reinforcement Learning, Neural Information Processing Systems (NeurIPS).