I am a fourth-year Ph.D. student majoring in computer science at the Institute for Interdisciplinary Information Sciences at Tsinghua University. I am very fortunate to be advised by Professor Andrew Chi-Chih Yao, recipient of the 2000 A.M. Turing Award. I received my B.S. degree in artificial intelligence from Peking University in 2021, advised by Professor Liwei Wang.
My research lies at the intersection of theoretical and applied machine learning. On the theoretical side, I am interested in establishing provable guarantees for the generalization and optimization of machine learning algorithms. On the empirical side, I have hands-on experience with large-scale LLM pre-training and am committed to designing efficient optimization algorithms that improve the scalability and performance of pre-training. I have worked on topics including:
- Parameter-efficient fine-tuning of LLMs.
- Scalable model merging.
- Generalization guarantees of machine learning algorithms.
- Implicit bias and its empirical signals.
- Optimization algorithms for structured problems.
Publications
Understanding Nonlinear Implicit Bias via Region Counts in Input Space
Published in ICML, 2025
Jingwei Li*, Jing Xu*, Zifan Wang, Huishuai Zhang, Jingzhao Zhang
Scalable Model Merging with Progressive Layer-wise Distillation
Published in ICML, 2025
Jing Xu, Jiazheng Li, Jingzhao Zhang
Functionally Constrained Algorithm Solves Convex Simple Bilevel Problems
Published in NeurIPS, 2024
Huaqing Zhang*, Lesi Chen*, Jing Xu, Jingzhao Zhang
Random Masking Finds Winning Tickets for Parameter Efficient Fine-tuning
Published in ICML, 2024
Jing Xu, Jingzhao Zhang
On Bilevel Optimization without Lower-level Strong Convexity
Published in COLT, 2024
Lesi Chen*, Jing Xu*, Jingzhao Zhang
Towards Data-Algorithm Dependent Generalization Analysis: A Case Study on Overparameterized Linear Regression
Published in NeurIPS, 2023
Jing Xu*, Jiaye Teng*, Yang Yuan, Andrew C Yao
Quantifying the Variability Collapse of Neural Networks
Published in ICML, 2023
Jing Xu*, Haoxiong Liu*
Faster Gradient-Free Algorithms for Nonsmooth Nonconvex Stochastic Optimization
Published in ICML, 2023
Lesi Chen, Jing Xu, Luo Luo
Preprints
FedCM: Federated Learning with Client-level Momentum
Jing Xu, Sen Wang, Liwei Wang, Andrew C Yao