Publications of H. Mohammadi
Theses
  1. H. Mohammadi. Robustness of gradient methods for data-driven decision making. PhD thesis, University of Southern California, 2022. Keyword(s): Accelerated first-order algorithms, Control for optimization, Convergence rate, Convex optimization, Data-driven control, Gradient descent, Gradient-flow dynamics, Heavy-ball method, Integral quadratic constraints, Linear quadratic regulator, Model-free control, Nesterov's accelerated method, Nonconvex optimization, Nonnormal dynamics, Noise amplification, Optimization, Optimal control, Polyak-Lojasiewicz inequality, Random search method, Reinforcement learning, Sample complexity, Second-order moments, Transient growth. [bibtex-entry]


Journal articles
  1. H. Mohammadi, M. Razaviyayn, and M. R. Jovanovic. Tradeoffs between convergence rate and noise amplification for momentum-based accelerated optimization algorithms. IEEE Trans. Automat. Control, 2024. Note: DOI: 10.1109/TAC.2024.3453656. Keyword(s): Accelerated first-order algorithms, Control for optimization, Convergence rate, Convex optimization, Gradient descent, Fundamental limitations, Heavy-ball method, Nesterov's accelerated method, Nonnormal dynamics, Noise amplification, Second-order moments. [bibtex-entry]


  2. H. Mohammadi, M. Tinati, S. Tu, M. Soltanolkotabi, and M. R. Jovanovic. Stability properties of gradient flow dynamics for the symmetric low-rank matrix factorization problem. IEEE Control Syst. Lett., 2024. Note: Submitted. Keyword(s): Low rank matrix factorization, Nonconvex optimization, Stability of nonlinear systems, Gradient flow dynamics. [bibtex-entry]


  3. H. Mohammadi, S. Samuelson, and M. R. Jovanovic. Transient growth of accelerated optimization algorithms. IEEE Trans. Automat. Control, 68(3):1823-1830, March 2023. Keyword(s): Accelerated first-order algorithms, Control for optimization, Convex optimization, Gradient descent, Heavy-ball method, Integral quadratic constraints, Nesterov's accelerated method, Nonnormal dynamics, Transient growth. [bibtex-entry]


  4. I. K. Ozaslan, H. Mohammadi, and M. R. Jovanovic. Computing stabilizing feedback gains via a model-free policy gradient method. IEEE Control Syst. Lett., 7:407-412, July 2023. Keyword(s): Data-driven control, Gradient descent, Linear quadratic regulator, Model-free control, Nonconvex optimization, Optimization, Optimal control, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]


  5. H. Mohammadi, A. Zare, M. Soltanolkotabi, and M. R. Jovanovic. Convergence and sample complexity of gradient methods for the model-free linear-quadratic regulator problem. IEEE Trans. Automat. Control, 67(5):2435-2450, May 2022. Keyword(s): Data-driven control, Gradient descent, Gradient-flow dynamics, Linear quadratic regulator, Model-free control, Nonconvex optimization, Optimization, Optimal control, Polyak-Lojasiewicz inequality, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]


  6. H. Mohammadi, M. Razaviyayn, and M. R. Jovanovic. Robustness of accelerated first-order algorithms for strongly convex optimization problems. IEEE Trans. Automat. Control, 66(6):2480-2495, June 2021. Keyword(s): Accelerated first-order algorithms, Consensus networks, Control for optimization, Convex optimization, Integral quadratic constraints, Linear matrix inequalities, Noise amplification, Second-order moments, Semidefinite programming. [bibtex-entry]


  7. H. Mohammadi, M. Soltanolkotabi, and M. R. Jovanovic. On the linear convergence of random search for discrete-time LQR. IEEE Control Syst. Lett., 5(3):989-994, July 2021. Keyword(s): Data-driven control, Gradient descent, Linear quadratic regulator, Model-free control, Nonconvex optimization, Optimization, Optimal control, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]


  8. A. Zare, H. Mohammadi, N. K. Dhingra, T. T. Georgiou, and M. R. Jovanovic. Proximal algorithms for large-scale statistical modeling and sensor/actuator selection. IEEE Trans. Automat. Control, 65(8):3441-3456, August 2020. Keyword(s): Actuator selection, Augmented Lagrangian, Convex optimization, Low-rank perturbation, Matrix completion problem, Method of multipliers, Non-smooth optimization, Proximal algorithms, Regularization for design, Sensor selection, Sparsity-promoting optimal control, Structured covariances. [bibtex-entry]


Conference articles
  1. H. Mohammadi, M. Razaviyayn, and M. R. Jovanovic. Noise amplification of momentum-based optimization algorithms. In Proceedings of the 2023 American Control Conference, San Diego, CA, pages 849-854, 2023. Keyword(s): Accelerated first-order algorithms, Control for optimization, Convergence rate, Convex optimization, Gradient descent, Heavy-ball method, Nesterov's accelerated method, Noise amplification, Nonnormal dynamics, Two-step momentum algorithm. [bibtex-entry]


  2. S. Samuelson, H. Mohammadi, and M. R. Jovanovic. Performance of noisy higher-order accelerated gradient flow dynamics for strongly convex quadratic optimization problems. In Proceedings of the 2023 American Control Conference, San Diego, CA, pages 3839-3844, 2023. Keyword(s): Accelerated first-order algorithms, Control for optimization, Convergence rate, Convex optimization, Gradient flow dynamics, Noise amplification, Nonnormal dynamics, Two-step momentum algorithm. [bibtex-entry]


  3. S. Samuelson, H. Mohammadi, and M. R. Jovanovic. Performance of noisy three-step accelerated first-order optimization algorithms for strongly convex quadratic problems. In Proceedings of the 62nd IEEE Conference on Decision and Control, Singapore, pages 1300-1305, 2023. Keyword(s): Accelerated first-order algorithms, Control for optimization, Convergence rate, Convex optimization, Gradient flow dynamics, Noise amplification, Nonnormal dynamics, Three-step momentum algorithm. [bibtex-entry]


  4. H. Mohammadi and M. R. Jovanovic. On the noise amplification of primal-dual gradient flow dynamics based on proximal augmented Lagrangian. In Proceedings of the 2022 American Control Conference, Atlanta, GA, pages 926-931, 2022. Keyword(s): Control for optimization, Convex optimization, Integral quadratic constraints, Linear matrix inequalities, Noise amplification, Non-smooth optimization, Proximal algorithms, Primal-dual gradient flow dynamics, Primal-dual methods, Proximal augmented Lagrangian, Second-order moments, Semidefinite programming. [bibtex-entry]


  5. H. Mohammadi, M. Soltanolkotabi, and M. R. Jovanovic. On the lack of gradient domination for linear quadratic Gaussian problems with incomplete state information. In Proceedings of the 60th IEEE Conference on Decision and Control, Austin, TX, pages 1120-1124, 2021. Keyword(s): Data-driven control, Gradient descent, Gradient-flow dynamics, Model-free control, Nonconvex optimization, Optimization, Optimal control, Polyak-Lojasiewicz inequality, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]


  6. H. Mohammadi, M. Soltanolkotabi, and M. R. Jovanovic. Learning the model-free linear quadratic regulator via random search. In Proceedings of Machine Learning Research, 2nd Annual Conference on Learning for Dynamics and Control, volume 120, Berkeley, CA, pages 1-9, 2020. Keyword(s): Data-driven control, Gradient descent, Gradient-flow dynamics, Linear quadratic regulator, Model-free control, Nonconvex optimization, Optimization, Optimal control, Polyak-Lojasiewicz inequality, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]


  7. H. Mohammadi, M. Soltanolkotabi, and M. R. Jovanovic. Random search for learning the linear quadratic regulator. In Proceedings of the 2020 American Control Conference, Denver, CO, pages 4798-4803, 2020. Keyword(s): Data-driven control, Gradient descent, Gradient-flow dynamics, Linear quadratic regulator, Model-free control, Nonconvex optimization, Optimization, Optimal control, Polyak-Lojasiewicz inequality, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]


  8. S. Samuelson, H. Mohammadi, and M. R. Jovanovic. On the transient growth of Nesterov's accelerated method for strongly convex optimization problems. In Proceedings of the 59th IEEE Conference on Decision and Control, Jeju Island, Republic of Korea, pages 5911-5916, 2020. Note: (Invited paper). Keyword(s): Accelerated first-order algorithms, Control for optimization, Convex optimization, Gradient descent, Integral quadratic constraints, Nesterov's accelerated method, Nonnormal dynamics, Transient growth. [bibtex-entry]


  9. S. Samuelson, H. Mohammadi, and M. R. Jovanovic. Transient growth of accelerated first-order methods. In Proceedings of the 2020 American Control Conference, Denver, CO, pages 2858-2863, 2020. Keyword(s): Accelerated first-order algorithms, Control for optimization, Convex optimization, Gradient descent, Transient growth. [bibtex-entry]


  10. H. Mohammadi, M. Razaviyayn, and M. R. Jovanovic. Performance of noisy Nesterov's accelerated method for strongly convex optimization problems. In Proceedings of the 2019 American Control Conference, Philadelphia, PA, pages 3426-3431, 2019. Keyword(s): Accelerated first-order algorithms, Control for optimization, Convex optimization, Integral quadratic constraints, Linear matrix inequalities, Noise amplification, Second-order moments, Semidefinite programming. [bibtex-entry]


  11. H. Mohammadi, A. Zare, M. Soltanolkotabi, and M. R. Jovanovic. Global exponential convergence of gradient methods over the nonconvex landscape of the linear quadratic regulator. In Proceedings of the 58th IEEE Conference on Decision and Control, Nice, France, pages 7474-7479, 2019. Keyword(s): Data-driven control, Global exponential stability, Gradient descent, Gradient-flow dynamics, Model-free control, Nonconvex optimization, Optimization, Optimal control, Reinforcement learning. [bibtex-entry]


  12. H. Mohammadi, M. Razaviyayn, and M. R. Jovanovic. On the stability of gradient flow dynamics for a rank-one matrix approximation problem. In Proceedings of the 2018 American Control Conference, Milwaukee, WI, pages 4533-4538, 2018. Keyword(s): Nonconvex optimization, Stability of nonlinear systems, Matrix approximation, Gradient flow dynamics. [bibtex-entry]


  13. H. Mohammadi, M. Razaviyayn, and M. R. Jovanovic. Variance amplification of accelerated first-order algorithms for strongly convex quadratic optimization problems. In Proceedings of the 57th IEEE Conference on Decision and Control, Miami, FL, pages 5753-5758, 2018. Keyword(s): Accelerated optimization algorithms, Control for optimization, Input-output analysis, Large-scale networks, Fundamental limitations, Robustness, Variance amplification. [bibtex-entry]


Book chapters
  1. H. Mohammadi, M. Soltanolkotabi, and M. R. Jovanovic. Model-free linear quadratic regulator. In K. G. Vamvoudakis, Y. Wan, F. Lewis, and D. Cansever, editors, Handbook of Reinforcement Learning and Control. Springer International Publishing, 2021. Note: DOI: 10.1007/978-3-030-60990-0. Keyword(s): Data-driven control, Gradient descent, Gradient-flow dynamics, Linear quadratic regulator, Model-free control, Nonconvex optimization, Optimization, Optimal control, Polyak-Lojasiewicz inequality, Random search method, Reinforcement learning, Sample complexity. [bibtex-entry]



Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by the authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.




Last modified: Sat Oct 5 22:00:41 2024
Author: mihailo.


This document was translated from BibTeX by bibtex2html