Lund, K., & Palitta, D. (2024). Low-rank-modified Galerkin methods for the Lyapunov equation. Electronic Transactions on Numerical Analysis, 62, 1-21. doi:10.1553/etna_vol62s1.
Low-rank-modified Galerkin methods for the Lyapunov equation
Palitta, Davide (member of the Collaboration Group)
2024
Abstract
Of all the possible projection methods for solving large-scale Lyapunov matrix equations, Galerkin approaches remain much more popular than minimal residual ones. This is mainly due to the different nature of the projected problems stemming from these two families of methods. While a Galerkin approach leads to the solution of a low-dimensional matrix equation per iteration, a minimal residual approach requires the solution of a matrix least-squares problem at each iteration. The significant computational cost of these least-squares problems has steered researchers towards Galerkin methods in spite of the appealing properties of minimal residual schemes. In this paper we introduce a framework for modifying the Galerkin approach by low-rank, additive corrections to the projected matrix equation, with the twofold goal of attaining monotonic convergence rates similar to those of minimal residual schemes while maintaining essentially the same computational cost as the original Galerkin method. We analyze the well-posedness of our framework and identify scenarios in which we expect the residual norm attained by two low-rank-modified variants to behave similarly to the one computed by a minimal residual technique. A panel of diverse numerical examples shows the behavior and potential of our new approach.
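To make the contrast described in the abstract concrete, the sketch below shows a plain (unmodified) Galerkin projection step for the Lyapunov equation A X + X A^T + B B^T = 0 on a block Krylov subspace: the large equation is reduced to a small projected Lyapunov equation, which is then solved directly. This is a minimal illustration under stated assumptions, not the authors' implementation; the naive basis construction, the helper name `galerkin_lyapunov`, and the toy data are all illustrative choices, and the paper's low-rank additive correction of the projected problem is not reproduced here.

```python
# Minimal sketch (not the authors' code): one Galerkin projection step for
# A X + X A^T + B B^T = 0 on a block Krylov subspace K_m(A, B).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def galerkin_lyapunov(A, B, m):
    """Approximate X ~= V Y V^T, with V an orthonormal basis of K_m(A, B)."""
    blocks = [B]
    for _ in range(m - 1):
        blocks.append(A @ blocks[-1])       # grow the block Krylov sequence
    V, _ = np.linalg.qr(np.hstack(blocks))  # naive QR; real codes use block Arnoldi
    Am, Bm = V.T @ A @ V, V.T @ B           # projected (small) data
    # Galerkin step: solve the small Lyapunov equation Am Y + Y Am^T + Bm Bm^T = 0
    Y = solve_continuous_lyapunov(Am, -Bm @ Bm.T)
    X = V @ Y @ V.T
    res = np.linalg.norm(A @ X + X @ A.T + B @ B.T, "fro")  # true residual norm
    return X, res

# Toy example: a (field-of-values) stable A and a rank-1 right-hand side.
rng = np.random.default_rng(0)
n = 200
A = -np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
B = rng.standard_normal((n, 1))
_, res = galerkin_lyapunov(A, B, m=20)
print(f"Frobenius residual after projection: {res:.2e}")
```

In a minimal residual method, the small projected equation above would be replaced by a matrix least-squares problem over the same subspace, which is the more expensive inner step the abstract refers to; the proposed framework instead keeps the small Galerkin equation but perturbs it with a low-rank additive correction before solving.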
File | Type | License | Size | Format
---|---|---|---|---
pp1-21.pdf (open access) | Publisher's PDF version | Free open-access license | 1 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.