Guanghui Wang

PhD Student in Machine Learning
School of Computer Science
Georgia Institute of Technology
Office: CODA 12th Floor S1249J

Google Scholar | Twitter

About Me

Hello! I am Guanghui Wang (王广辉), a fifth-year PhD student in Machine Learning at the Georgia Institute of Technology. I am very fortunate to be advised by Prof. Jake Abernethy and Prof. Vidya Muthukumar.

Before joining Georgia Tech, I obtained my M.S. degree from the Department of Computer Science and Technology at Nanjing University in 2020, where I was very fortunate to be advised by Prof. Lijun Zhang. I was also a member of the LAMDA group, led by Prof. Zhi-Hua Zhou. I received my B.E. degree from the School of Electronic Engineering at Xidian University in 2017.

During the summer of 2025, I interned at Apple with Satyen Kale, working on federated optimization for variational inequalities.

I am interested in online learning, game theory, and stochastic optimization.

I am a recipient of the 2025 Apple Scholars in AI/ML PhD fellowship.

Preprints

  1. Faster Rates for Federated Variational Inequalities
    Guanghui Wang, Satyen Kale.
    Preprint.

Publications

  1. Multi-distribution Learning: From Worst-Case Optimality to Lexicographic Min-Max Optimality
    Guanghui Wang, Umar Syed, Robert Schapire, Jacob Abernethy.
    In ALT, 2026.
  2. Last-iterate Convergence for Symmetric, General-sum, 2×2 Games under the Exponential Weights Dynamic
    Guanghui Wang, Krishna Acharya, Lokranjan Lakshmikanthan, Juba Ziani, Vidya Muthukumar.
    In ALT, 2026.
  3. Faster Margin Maximization Rates for Generic and Adversarially Robust Optimization Methods
    Guanghui Wang, Zihao Hu, Claudio Gentile, Vidya Muthukumar, Jacob Abernethy.
    In Mathematical Programming, 2025.
  4. Extragradient Type Methods for Riemannian Variational Inequality Problems
    Zihao Hu, Guanghui Wang, Xi Wang, Andre Wibisono, Jacob Abernethy, Molei Tao.
    In AISTATS, 2024.
  5. Faster Margin Maximization Rates for Generic Optimization Methods
    Guanghui Wang, Zihao Hu, Vidya Muthukumar, Jacob Abernethy.
    In NeurIPS, 2023 (Spotlight).
  6. On Riemannian Projection-free Online Learning
    Zihao Hu, Guanghui Wang, Jacob Abernethy.
    In NeurIPS, 2023.
  7. Minimizing Dynamic Regret on Geodesic Metric Spaces
    Zihao Hu, Guanghui Wang, Jacob Abernethy.
    In COLT, 2023.
  8. On Accelerated Perceptrons and Beyond
    Guanghui Wang, Rafael Hanashiro, Etash Guha, Jacob Abernethy.
    In ICLR, 2023.
  9. Adaptive Oracle-Efficient Online Learning
    Guanghui Wang, Zihao Hu, Vidya Muthukumar, Jacob Abernethy.
    In NeurIPS, 2022.
  10. A Simple yet Universal Strategy for Online Convex Optimization
    Lijun Zhang, Guanghui Wang, Jinfeng Yi, Tianbao Yang.
    In ICML, 2022.
  11. Momentum Accelerates the Convergence of Stochastic AUPRC Maximization
    Guanghui Wang, Ming Yang, Lijun Zhang, Tianbao Yang.
    In AISTATS, 2022.
  12. Projection-free Distributed Online Learning with Sublinear Communication Complexity
    Yuanyu Wan, Guanghui Wang, Wei-Wei Tu, Lijun Zhang.
    In JMLR, 2022.
  13. Online Convex Optimization with Continuous Switching Constraint
    Guanghui Wang, Yuanyu Wan, Tianbao Yang, Lijun Zhang.
    In NeurIPS, 2021.
  14. Dual Adaptivity: A Universal Algorithm for Minimizing the Adaptive Regret of Convex Functions
    Lijun Zhang, Guanghui Wang, Wei-Wei Tu, Wei Jiang, Zhi-Hua Zhou.
    In NeurIPS, 2021.
  15. Stochastic Graphical Bandits with Adversarial Corruptions
    Shiyin Lu, Guanghui Wang, Lijun Zhang.
    In AAAI, 2021.
  16. Bandit Convex Optimization in Non-stationary Environments
    Peng Zhao, Guanghui Wang, Lijun Zhang, Zhi-Hua Zhou.
    In JMLR, 2021.
  17. SAdam: A Variant of Adam for Strongly Convex Functions
    Guanghui Wang, Shiyin Lu, Quan Cheng, Wei-Wei Tu, Lijun Zhang.
    In ICLR, 2020.
  18. Bandit Convex Optimization in Non-stationary Environments
    Peng Zhao, Guanghui Wang, Lijun Zhang, Zhi-Hua Zhou.
    In AISTATS, 2020.
  19. Adapting to Smoothness: A More Universal Algorithm for Online Convex Optimization
    Guanghui Wang, Shiyin Lu, Yao Hu, Lijun Zhang.
    In AAAI, 2020.
  20. Nearly Optimal Regret for Stochastic Linear Bandits with Heavy-Tailed Payoffs
    Bo Xue, Guanghui Wang, Yimu Wang, Lijun Zhang.
    In IJCAI, 2020.
  21. Adaptivity and Optimality: A Universal Algorithm for Online Convex Optimization
    Guanghui Wang, Shiyin Lu, Lijun Zhang.
    In UAI, 2019.
  22. Multi-Objective Generalized Linear Bandits
    Shiyin Lu, Guanghui Wang, Yao Hu, Lijun Zhang.
    In IJCAI, 2019.
  23. Optimal Algorithms for Lipschitz Bandits with Heavy-Tailed Rewards
    Shiyin Lu, Guanghui Wang, Yao Hu, Lijun Zhang.
    In ICML, 2019.
  24. Minimizing Adaptive Regret with One Gradient per Iteration
    Guanghui Wang, Dakuan Zhao, Lijun Zhang.
    In IJCAI, 2018.

Academic Service

Reviewer: COLT, ALT, ICML, ICLR, NeurIPS, AISTATS, TMLR

Teaching

  1. TA, ECE 8803 Online Decision Making in Machine Learning, Fall 2021
  2. TA and co-instructor (6 lectures), ECE 8803 Online Decision Making in Machine Learning, Fall 2022
  3. TA and co-instructor (6 lectures), CS 7545 Machine Learning Theory, Spring 2023
  4. Co-instructor (2 lectures), ECE 8803 Online Decision Making in Machine Learning, Fall 2023

Working/Visiting Experience

  1. Visiting Graduate Student, Spring 2021, Simons Institute for the Theory of Computing.
  2. Research Assistant, Fall 2020 - Fall 2021, Nanjing University.

Awards

  1. Apple Scholars in AI/ML PhD fellowship, 2025.
  2. ARC-ACO Fellowship, 2022.
  3. National Scholarship, 2014, 2018.