
Learning passive policies with virtual energy tanks in robotics

Zanella, R; Palli, G; Stramigioli, S; Califano, F
2024

Abstract

Within a robotic context, the techniques of passivity-based control and reinforcement learning are merged with the goal of eliminating some of their reciprocal weaknesses, as well as inducing novel promising features in the resulting framework. The contribution is framed in a scenario where passivity-based control is implemented by means of virtual energy tanks, a control technique developed to achieve closed-loop passivity for any arbitrary control input. Although the latter result is heavily used, it is discussed why its practical application at its current stage remains rather limited, which makes contact with the highly debated claim that passivity-based techniques are associated with a loss of performance. The use of reinforcement learning makes it possible to learn a control policy that can be passivized using the energy tank architecture, combining the versatility of learning approaches with the system-theoretic properties that can be inferred thanks to the energy tanks. Simulations show the validity of the approach, as well as novel and interesting research directions in energy-aware robotics.

In this work, a control framework is introduced where the versatility of learning approaches and the system's passivity property, derived by means of virtual energy tanks, are combined in order to eliminate some of their reciprocal weaknesses.
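
As background for the abstract above, the following is a minimal sketch of the generic virtual energy-tank construction that underlies the claim of closed-loop passivity for an arbitrary control input. The symbols used here (tank state $x_t$, recovery factor $\sigma$, gating factor $\gamma$, threshold $\varepsilon$) are assumptions drawn from the general energy-tank literature, not from this specific paper.

For a robot with energy $H(q,\dot{q})$, dissipated power $D(\dot{q}) \ge 0$, control torque $\tau$ and external torque $\tau_{\mathrm{ext}}$, the plant satisfies $\dot{H} \le \tau^{\top}\dot{q} + \tau_{\mathrm{ext}}^{\top}\dot{q} - D$. A virtual tank with state $x_t$ and energy $T(x_t) = \tfrac{1}{2}x_t^{2}$ mediates an arbitrary (e.g. learned) action $w$:

$$
\dot{x}_t = \frac{\sigma}{x_t}\,D(\dot{q}) - \frac{\gamma}{x_t}\,w^{\top}\dot{q},
\qquad
\tau = \gamma\, w,
\qquad
\gamma =
\begin{cases}
1 & \text{if } T(x_t) \ge \varepsilon,\\
0 & \text{otherwise,}
\end{cases}
\qquad \sigma \in [0,1].
$$

The total storage then obeys

$$
\frac{\mathrm{d}}{\mathrm{d}t}\bigl(H + T\bigr)
\le \gamma\, w^{\top}\dot{q} + \tau_{\mathrm{ext}}^{\top}\dot{q} - D
+ \sigma D - \gamma\, w^{\top}\dot{q}
= \tau_{\mathrm{ext}}^{\top}\dot{q} - (1-\sigma)\,D
\le \tau_{\mathrm{ext}}^{\top}\dot{q},
$$

so the closed loop remains passive at the external power port $(\tau_{\mathrm{ext}}, \dot{q})$ regardless of how $w$ is generated, with the gate $\gamma$ cutting the action once the tank energy drops below the threshold $\varepsilon$.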
Zanella, R., Palli, G., Stramigioli, S., Califano, F. (2024). Learning passive policies with virtual energy tanks in robotics. IET Control Theory & Applications, 18(5), 541-550 [10.1049/cth2.12558].
Files in this item:

File: cth212558_LR.pdf
Access: Open access
Type: Publisher's version (PDF)
Licence: Creative Commons
Size: 651.19 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/971748
Citations
  • PMC: not available
  • Scopus: 1
  • Web of Science (ISI): 1