Back to MJ's Publications

Publications about 'Reinforcement learning'
Journal articles
  1. H. Mohammadi, M. Soltanolkotabi, and M. R. Jovanovic. On the linear convergence of random search for discrete-time LQR. IEEE Control Syst. Lett., 5(3):989-994, July 2021. Keyword(s): Data-driven control, Gradient descent, Linear quadratic regulator, Model-free control, Nonconvex optimization, Optimization, Optimal control, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]


  2. H. Mohammadi, A. Zare, M. Soltanolkotabi, and M. R. Jovanovic. Convergence and sample complexity of gradient methods for the model-free linear quadratic regulator problem. IEEE Trans. Automat. Control, 2021. Note: doi: 10.1109/TAC.2021.3087455; also arXiv:1912.11899. Keyword(s): Data-driven control, Gradient descent, Gradient-flow dynamics, Linear quadratic regulator, Model-free control, Nonconvex optimization, Optimization, Optimal control, Polyak-Lojasiewicz inequality, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]


  3. D. Ding, X. Wei, Z. Yang, Z. Wang, and M. R. Jovanovic. Fast multi-agent temporal-difference learning via homotopy stochastic primal-dual optimization. IEEE Trans. Automat. Control, 2020. Note: Submitted; also arXiv:1908.02805. Keyword(s): Convex optimization, Distributed temporal-difference learning, Multi-agent systems, Primal-dual algorithms, Reinforcement learning, Stochastic optimization. [bibtex-entry]


Conference articles
  1. D. Ding, X. Wei, Z. Yang, Z. Wang, and M. R. Jovanovic. Provably efficient safe exploration via primal-dual policy optimization. In 24th International Conference on Artificial Intelligence and Statistics, volume 130, Virtual, pages 3304-3312, 2021. Keyword(s): Safe reinforcement learning, Constrained Markov decision processes, Safe exploration, Proximal policy optimization, Non-convex optimization, Online mirror descent, Primal-dual method. [bibtex-entry]


  2. H. Mohammadi, M. Soltanolkotabi, and M. R. Jovanovic. On the lack of gradient domination for linear quadratic Gaussian problems with incomplete state information. In Proceedings of the 60th IEEE Conference on Decision and Control, Austin, TX, 2021. Note: Submitted. Keyword(s): Data-driven control, Gradient descent, Gradient-flow dynamics, Model-free control, Nonconvex optimization, Optimization, Optimal control, Polyak-Lojasiewicz inequality, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]


  3. H. Mohammadi, M. Soltanolkotabi, and M. R. Jovanovic. Learning the model-free linear quadratic regulator via random search. In Proceedings of Machine Learning Research, 2nd Annual Conference on Learning for Dynamics and Control, volume 120, Berkeley, CA, pages 1-9, 2020. Keyword(s): Data-driven control, Gradient descent, Gradient-flow dynamics, Linear quadratic regulator, Model-free control, Nonconvex optimization, Optimization, Optimal control, Polyak-Lojasiewicz inequality, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]


  4. H. Mohammadi, M. Soltanolkotabi, and M. R. Jovanovic. Random search for learning the linear quadratic regulator. In Proceedings of the 2020 American Control Conference, Denver, CO, pages 4798-4803, 2020. Keyword(s): Data-driven control, Gradient descent, Gradient-flow dynamics, Linear quadratic regulator, Model-free control, Nonconvex optimization, Optimization, Optimal control, Polyak-Lojasiewicz inequality, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]


  5. D. Ding, X. Wei, Z. Yang, Z. Wang, and M. R. Jovanovic. Fast multi-agent temporal-difference learning via homotopy stochastic primal-dual method. In Optimization Foundations for Reinforcement Learning Workshop, 33rd Conference on Neural Information Processing Systems, Vancouver, Canada, 2019. Keyword(s): Convex optimization, Distributed temporal-difference learning, Multi-agent systems, Primal-dual algorithms, Reinforcement learning, Stochastic optimization. [bibtex-entry]


  6. S. Hassan-Moghaddam, M. R. Jovanovic, and S. Meyn. Data-driven proximal algorithms for the design of structured optimal feedback gains. In Proceedings of the 2019 American Control Conference, Philadelphia, PA, pages 5846-5850, 2019. Keyword(s): Data-driven feedback design, Large-scale systems, Non-smooth optimization, Proximal algorithms, Reinforcement learning, Sparsity-promoting optimal control, Structured optimal control. [bibtex-entry]


  7. H. Mohammadi, A. Zare, M. Soltanolkotabi, and M. R. Jovanovic. Global exponential convergence of gradient methods over the nonconvex landscape of the linear quadratic regulator. In Proceedings of the 58th IEEE Conference on Decision and Control, Nice, France, pages 7474-7479, 2019. Keyword(s): Data-driven control, Global exponential stability, Gradient descent, Gradient-flow dynamics, Model-free control, Nonconvex optimization, Optimization, Optimal control, Reinforcement learning. [bibtex-entry]


Book chapters
  1. H. Mohammadi, M. Soltanolkotabi, and M. R. Jovanovic. Model-free linear quadratic regulator. In K. G. Vamvoudakis, Y. Wan, F. Lewis, and D. Cansever, editors, Handbook of Reinforcement Learning and Control. Springer International Publishing, 2021. Note: doi: 10.1007/978-3-030-60990-0. Keyword(s): Data-driven control, Gradient descent, Gradient-flow dynamics, Linear quadratic regulator, Model-free control, Nonconvex optimization, Optimization, Optimal control, Polyak-Lojasiewicz inequality, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]







Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by the authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.




Last modified: Mon Jun 7 10:25:01 2021
Author: mihailo.


This document was translated from BibTeX by bibtex2html