Back to MJ's Publications

Publications about 'Reinforcement learning'
Theses
  1. D. Ding. Provable reinforcement learning for constrained and multi-agent control systems. PhD thesis, University of Southern California, 2022. Keyword(s): Constrained Markov decision processes, Constrained nonconvex optimization, Function approximation, Game-agnostic convergence, Multi-agent reinforcement learning, Multi-agent systems, Natural policy gradient, Policy gradient methods, Proximal policy optimization, Primal-dual algorithms, Reinforcement learning, Safe exploration, Safe reinforcement learning, Sample complexity, Stochastic optimization. [bibtex-entry]


  2. H. Mohammadi. Robustness of gradient methods for data-driven decision making. PhD thesis, University of Southern California, 2022. Keyword(s): Accelerated first-order algorithms, Control for optimization, Convergence rate, Convex optimization, Data-driven control, Gradient descent, Gradient-flow dynamics, Heavy-ball method, Integral quadratic constraints, Linear quadratic regulator, Model-free control, Nesterov's accelerated method, Nonconvex optimization, Nonnormal dynamics, Noise amplification, Optimization, Optimal control, Polyak-Lojasiewicz inequality, Random search method, Reinforcement learning, Sample complexity, Second-order moments, Transient growth. [bibtex-entry]


Journal articles
  1. I. K. Ozaslan, H. Mohammadi, and M. R. Jovanovic. Computing stabilizing feedback gains via a model-free policy gradient method. IEEE Control Syst. Lett., 7:407-412, July 2023. Keyword(s): Data-driven control, Gradient descent, Linear quadratic regulator, Model-free control, Nonconvex optimization, Optimization, Optimal control, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]


  2. H. Mohammadi, A. Zare, M. Soltanolkotabi, and M. R. Jovanovic. Convergence and sample complexity of gradient methods for the model-free linear-quadratic regulator problem. IEEE Trans. Automat. Control, 67(5):2435-2450, May 2022. Keyword(s): Data-driven control, Gradient descent, Gradient-flow dynamics, Linear quadratic regulator, Model-free control, Nonconvex optimization, Optimization, Optimal control, Polyak-Lojasiewicz inequality, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]


  3. H. Mohammadi, M. Soltanolkotabi, and M. R. Jovanovic. On the linear convergence of random search for discrete-time LQR. IEEE Control Syst. Lett., 5(3):989-994, July 2021. Keyword(s): Data-driven control, Gradient descent, Linear quadratic regulator, Model-free control, Nonconvex optimization, Optimization, Optimal control, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]


Conference articles
  1. D. Ding, X. Wei, Z. Yang, Z. Wang, and M. R. Jovanovic. Provably efficient generalized Lagrangian policy optimization for safe multi-agent reinforcement learning. In Proceedings of 5th Annual Conference on Learning for Dynamics and Control, volume 211 of Proceedings of Machine Learning Research, Philadelphia, PA, pages 315-332, 2023. Keyword(s): Constrained Markov games, Method of Lagrange multipliers, Minimax optimization, Multi-agent reinforcement learning, Primal-dual policy optimization. [bibtex-entry]


  2. D. Ding, C.-Y. Wei, K. Zhang, and M. R. Jovanovic. Independent policy gradient for large-scale Markov potential games: sharper rates, function approximation, and game-agnostic convergence. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, Baltimore, MD, pages 5166-5220, 2022. Keyword(s): Multi-agent reinforcement learning, Independent reinforcement learning, Policy gradient methods, Markov potential games, Function approximation, Game-agnostic convergence. [bibtex-entry]


  3. D. Ding, X. Wei, Z. Yang, Z. Wang, and M. R. Jovanovic. Provably efficient safe exploration via primal-dual policy optimization. In Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, volume 130, Virtual, pages 3304-3312, 2021. Keyword(s): Safe reinforcement learning, Constrained Markov decision processes, Safe exploration, Proximal policy optimization, Nonconvex optimization, Online mirror descent, Primal-dual method. [bibtex-entry]


  4. H. Mohammadi, M. Soltanolkotabi, and M. R. Jovanovic. On the lack of gradient domination for linear quadratic Gaussian problems with incomplete state information. In Proceedings of the 60th IEEE Conference on Decision and Control, Austin, TX, pages 1120-1124, 2021. Keyword(s): Data-driven control, Gradient descent, Gradient-flow dynamics, Model-free control, Nonconvex optimization, Optimization, Optimal control, Polyak-Lojasiewicz inequality, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]


  5. H. Mohammadi, M. Soltanolkotabi, and M. R. Jovanovic. Learning the model-free linear quadratic regulator via random search. In Proceedings of the 2nd Annual Conference on Learning for Dynamics and Control, volume 120 of Proceedings of Machine Learning Research, Berkeley, CA, pages 1-9, 2020. Keyword(s): Data-driven control, Gradient descent, Gradient-flow dynamics, Linear quadratic regulator, Model-free control, Nonconvex optimization, Optimization, Optimal control, Polyak-Lojasiewicz inequality, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]


  6. H. Mohammadi, M. Soltanolkotabi, and M. R. Jovanovic. Random search for learning the linear quadratic regulator. In Proceedings of the 2020 American Control Conference, Denver, CO, pages 4798-4803, 2020. Keyword(s): Data-driven control, Gradient descent, Gradient-flow dynamics, Linear quadratic regulator, Model-free control, Nonconvex optimization, Optimization, Optimal control, Polyak-Lojasiewicz inequality, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]


  7. D. Ding, X. Wei, Z. Yang, Z. Wang, and M. R. Jovanovic. Fast multi-agent temporal-difference learning via homotopy stochastic primal-dual method. In Optimization Foundations for Reinforcement Learning Workshop, 33rd Conference on Neural Information Processing Systems, Vancouver, Canada, 2019. Keyword(s): Convex optimization, Distributed temporal-difference learning, Multi-agent systems, Primal-dual algorithms, Reinforcement learning, Stochastic optimization. [bibtex-entry]


  8. S. Hassan-Moghaddam, M. R. Jovanovic, and S. Meyn. Data-driven proximal algorithms for the design of structured optimal feedback gains. In Proceedings of the 2019 American Control Conference, Philadelphia, PA, pages 5846-5850, 2019. Keyword(s): Data-driven feedback design, Large-scale systems, Non-smooth optimization, Proximal algorithms, Reinforcement learning, Sparsity-promoting optimal control, Structured optimal control. [bibtex-entry]


  9. H. Mohammadi, A. Zare, M. Soltanolkotabi, and M. R. Jovanovic. Global exponential convergence of gradient methods over the nonconvex landscape of the linear quadratic regulator. In Proceedings of the 58th IEEE Conference on Decision and Control, Nice, France, pages 7474-7479, 2019. Keyword(s): Data-driven control, Global exponential stability, Gradient descent, Gradient-flow dynamics, Model-free control, Nonconvex optimization, Optimization, Optimal control, Reinforcement learning. [bibtex-entry]


Book chapters
  1. H. Mohammadi, M. Soltanolkotabi, and M. R. Jovanovic. Model-free linear quadratic regulator. In K. G. Vamvoudakis, Y. Wan, F. Lewis, and D. Cansever, editors, Handbook of Reinforcement Learning and Control. Springer International Publishing, 2021. Note: doi:10.1007/978-3-030-60990-0. Keyword(s): Data-driven control, Gradient descent, Gradient-flow dynamics, Linear quadratic regulator, Model-free control, Nonconvex optimization, Optimization, Optimal control, Polyak-Lojasiewicz inequality, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]


Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by the authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.




Last modified: Sat Oct 5 22:00:41 2024
Author: mihailo.


This document was translated from BibTeX by bibtex2html