Borghesi, M., Bosso, A., Notarstefano, G. (2026). MR-ARL: Model Reference Adaptive Reinforcement Learning for Robustly Stable On-Policy Data-Driven LQR. IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 71(2), 1129-1144 [10.1109/TAC.2025.3611155].

MR-ARL: Model Reference Adaptive Reinforcement Learning for Robustly Stable On-Policy Data-Driven LQR

Borghesi M. (first author); Bosso A. (second author); Notarstefano G. (last author)
2026

Abstract

This article introduces a novel framework for data-driven linear quadratic regulator (LQR) design. First, we introduce a reinforcement learning paradigm for on-policy data-driven LQR, where exploration and exploitation are simultaneously performed while guaranteeing robust stability of the whole closed-loop system encompassing the plant and the control/learning dynamics. Then, we propose model reference adaptive reinforcement learning (MR-ARL), a control architecture integrating tools from reinforcement learning (RL) and model reference adaptive control (MRAC). The approach is based on a variable reference model containing the currently identified value function. Then, an adaptive stabilizer is used to ensure convergence of the applied policy to the optimal one, convergence of the plant to the optimal reference model, and overall robust closed-loop stability. The proposed framework provides theoretical robustness guarantees against perturbations, such as measurement noise, plant nonlinearities, or slowly varying parameters. The effectiveness of the proposed architecture is showcased via realistic numerical simulations.
Files in this item:

File: TAC_MR-ARL.pdf
Embargo: until 17/09/2027
Type: Postprint / Author's Accepted Manuscript (AAM) - version accepted for publication after peer review
License: Free open access license
Size: 2.18 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1042541