Back to MJ's Publications

Publications about 'Primal-dual algorithms'
Theses
  1. D. Ding. Provable reinforcement learning for constrained and multi-agent control systems. PhD thesis, University of Southern California, 2022. Keyword(s): Constrained Markov decision processes, Constrained nonconvex optimization, Function approximation, Game-agnostic convergence, Multi-agent reinforcement learning, Multi-agent systems, Natural policy gradient, Policy gradient methods, Proximal policy optimization, Primal-dual algorithms, Reinforcement learning, Safe exploration, Safe reinforcement learning, Sample complexity, Stochastic optimization. [bibtex-entry]


Journal articles
  1. D. Ding, K. Zhang, J. Duan, T. Basar, and M. R. Jovanovic. Convergence and sample complexity of natural policy gradient primal-dual methods for constrained MDPs. J. Mach. Learn. Res., 2022. Note: Submitted; also arXiv:2206.02346. Keyword(s): Constrained Markov decision processes, Constrained nonconvex optimization, Function approximation, Natural policy gradient, Policy gradient methods, Primal-dual algorithms, Sample complexity. [bibtex-entry]


  2. D. Ding, X. Wei, Z. Yang, Z. Wang, and M. R. Jovanovic. Fast multi-agent temporal-difference learning via homotopy stochastic primal-dual optimization. IEEE Trans. Automat. Control, 2020. Note: Submitted; also arXiv:1908.02805. Keyword(s): Convex optimization, Distributed temporal-difference learning, Multi-agent systems, Primal-dual algorithms, Reinforcement learning, Stochastic optimization. [bibtex-entry]


Conference articles
  1. D. Ding and M. R. Jovanovic. Policy gradient primal-dual mirror descent for constrained MDPs with large state spaces. In Proceedings of the 61st IEEE Conference on Decision and Control, Cancun, Mexico, 2022. Note: To appear. Keyword(s): Constrained Markov decision processes, Policy gradient methods, Primal-dual algorithms, Mirror descent, Function approximation. [bibtex-entry]


  2. D. Ding, K. Zhang, T. Basar, and M. R. Jovanovic. Convergence and optimality of policy gradient primal-dual method for constrained Markov decision processes. In Proceedings of the 2022 American Control Conference, Atlanta, GA, pages 2851-2856, 2022. Keyword(s): Constrained Markov decision processes, Policy gradient methods, Primal-dual algorithms. [bibtex-entry]


  3. D. Ding, K. Zhang, T. Basar, and M. R. Jovanovic. Natural policy gradient primal-dual method for constrained Markov decision processes. In Proceedings of the 34th Conference on Neural Information Processing Systems, volume 33, Vancouver, Canada, pages 8378-8390, 2020. Keyword(s): Constrained Markov decision processes, Constrained nonconvex optimization, Natural policy gradient, Policy gradient methods, Primal-dual algorithms. [bibtex-entry]


  4. D. Ding, X. Wei, Z. Yang, Z. Wang, and M. R. Jovanovic. Fast multi-agent temporal-difference learning via homotopy stochastic primal-dual method. In Optimization Foundations for Reinforcement Learning Workshop, 33rd Conference on Neural Information Processing Systems, Vancouver, Canada, 2019. Keyword(s): Convex optimization, Distributed temporal-difference learning, Multi-agent systems, Primal-dual algorithms, Reinforcement learning, Stochastic optimization. [bibtex-entry]




Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.




Last modified: Sun Oct 23 23:45:07 2022
Author: mihailo.


This document was translated from BibTeX by bibtex2html