Back to MJ's Publications
Publications about 'Constrained Markov decision processes'
-
D. Ding.
Provable reinforcement learning for constrained and multi-agent control systems.
PhD thesis,
University of Southern California,
2022.
Keyword(s): Constrained Markov decision processes,
Constrained nonconvex optimization,
Function approximation,
Game-agnostic convergence,
Multi-agent reinforcement learning,
Multi-agent systems,
Natural policy gradient,
Policy gradient methods,
Proximal policy optimization,
Primal-dual algorithms,
Reinforcement learning,
Safe exploration,
Safe reinforcement learning,
Sample complexity,
Stochastic optimization.
-
D. Ding,
K. Zhang,
J. Duan,
T. Basar,
and M. R. Jovanovic.
Convergence and sample complexity of natural policy gradient primal-dual methods for constrained MDPs.
J. Mach. Learn. Res.,
2022.
Note: Submitted; also arXiv:2206.02346.
Keyword(s): Constrained Markov decision processes,
Constrained nonconvex optimization,
Function approximation,
Natural policy gradient,
Policy gradient methods,
Primal-dual algorithms,
Sample complexity.
-
D. Ding and M. R. Jovanovic.
Policy gradient primal-dual mirror descent for constrained MDPs with large state spaces.
In Proceedings of the 61st IEEE Conference on Decision and Control,
Cancun, Mexico,
pages 4892-4897,
2022.
Keyword(s): Constrained Markov decision processes,
Policy gradient methods,
Primal-dual algorithms,
Mirror descent,
Function approximation.
-
D. Ding,
K. Zhang,
T. Basar,
and M. R. Jovanovic.
Convergence and optimality of policy gradient primal-dual method for constrained Markov decision processes.
In Proceedings of the 2022 American Control Conference,
Atlanta, GA,
pages 2851-2856,
2022.
Keyword(s): Constrained Markov decision processes,
Policy gradient methods,
Primal-dual algorithms.
-
D. Ding,
X. Wei,
Z. Yang,
Z. Wang,
and M. R. Jovanovic.
Provably efficient safe exploration via primal-dual policy optimization.
In 24th International Conference on Artificial Intelligence and Statistics,
volume 130,
Virtual,
pages 3304-3312,
2021.
Keyword(s): Safe reinforcement learning,
Constrained Markov decision processes,
Safe exploration,
Proximal policy optimization,
Nonconvex optimization,
Online mirror descent,
Primal-dual method.
-
D. Ding,
K. Zhang,
T. Basar,
and M. R. Jovanovic.
Natural policy gradient primal-dual method for constrained Markov decision processes.
In Proceedings of the 34th Conference on Neural Information Processing Systems,
volume 33,
Vancouver, Canada,
pages 8378-8390,
2020.
Keyword(s): Constrained Markov decision processes,
Constrained nonconvex optimization,
Natural policy gradient,
Policy gradient methods,
Primal-dual algorithms.
Disclaimer:
This material is presented to ensure timely dissemination of
scholarly and technical work. Copyright and all rights therein
are retained by authors or by other copyright holders.
All persons copying this information are expected to adhere to
the terms and constraints invoked by each author's copyright.
In most cases, these works may not be reposted
without the explicit permission of the copyright holder.
Last modified: Tue Jan 23 11:32:51 2024
Author: mihailo.
This document was translated from BibTeX by
bibtex2html.