On some orthogonalization schemes in Tensor Train format

Iannacito, Martina
2025

Abstract

In the framework of tensor spaces, we consider orthogonalization algorithms to generate an orthogonal basis of a tensor subspace from a set of linearly independent tensors. All variants, except for the Householder transformation, are straightforward extensions of well-known algorithms in matrix computation to tensors. In particular, we experimentally study the loss of orthogonality of six orthogonalization methods: Classical and Modified Gram-Schmidt with (CGS2, MGS2) and without (CGS, MGS) re-orthogonalization, the Cholesky-QR factorization, and the Householder transformation. To overcome the curse of dimensionality, we represent tensors with a low-rank approximation using the Tensor Train (TT) formalism. Additionally, we introduce recompression steps into the standard algorithm outline through the TT-rounding procedure at a prescribed accuracy. After describing the structure and properties of the algorithms, we illustrate their loss of orthogonality with numerical experiments. Although no formal proof exists at this time, we clearly observe that the well-established bounds verified over decades of research by the rounding error analysis community in matrix computation appear to extend to the case of low-rank tensors, with the unit round-off replaced by the TT-rounding accuracy. A computational analysis of each orthogonalization scheme in terms of memory requirements and computational complexity, measured as a function of the number of TT-rounding operations, which is the most computationally expensive kernel, completes the study.
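
As a point of reference for the loss-of-orthogonality behaviour the abstract alludes to, below is a minimal NumPy sketch of the matrix-level analogue, not the paper's TT implementation: it compares CGS, MGS, and a two-pass re-orthogonalized CGS on an ill-conditioned matrix and reports the loss of orthogonality ||I - Q^T Q||. In the TT setting each column would be a TT tensor and every inner product, sum, and normalization would be followed by a TT-rounding recompression at the prescribed accuracy; the test matrix and function names here are purely illustrative.

```python
import numpy as np

def cgs(A):
    """Classical Gram-Schmidt: loss of orthogonality grows like u * kappa(A)^2."""
    n, m = A.shape
    Q = np.zeros((n, m))
    for j in range(m):
        # Project the j-th column against all previously computed columns at once.
        v = A[:, j] - Q[:, :j] @ (Q[:, :j].T @ A[:, j])
        Q[:, j] = v / np.linalg.norm(v)
    return Q

def mgs(A):
    """Modified Gram-Schmidt: loss of orthogonality grows like u * kappa(A)."""
    Q = A.copy().astype(float)
    m = Q.shape[1]
    for j in range(m):
        Q[:, j] /= np.linalg.norm(Q[:, j])
        # Immediately remove the new direction from all remaining columns.
        for k in range(j + 1, m):
            Q[:, k] -= (Q[:, j] @ Q[:, k]) * Q[:, j]
    return Q

def cgs2(A):
    """Two-pass re-orthogonalized CGS: loss of orthogonality at the level of u."""
    return cgs(cgs(A))

# Ill-conditioned test matrix with singular values spanning 10 orders of magnitude.
rng = np.random.default_rng(0)
n, m = 100, 20
U, _ = np.linalg.qr(rng.standard_normal((n, m)))
V, _ = np.linalg.qr(rng.standard_normal((m, m)))
s = np.logspace(0, -10, m)
A = U @ np.diag(s) @ V.T

for name, method in [("CGS", cgs), ("MGS", mgs), ("CGS2", cgs2)]:
    Q = method(A)
    loss = np.linalg.norm(np.eye(m) - Q.T @ Q)
    print(f"{name:5s} loss of orthogonality ||I - Q^T Q|| = {loss:.2e}")
```

With this conditioning one typically sees CGS lose orthogonality almost completely, MGS degrade in proportion to the condition number times the unit round-off, and CGS2 stay at the level of the unit round-off; the paper's experiments suggest the same pattern carries over to the TT variants with the TT-rounding accuracy playing the role of the unit round-off.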
BIT
Coulaud, O., Giraud, L., Iannacito, M. (2025). On some orthogonalization schemes in Tensor Train format. BIT, 65(4), 1-30 [10.1007/s10543-025-01086-5].
Coulaud, Olivier; Giraud, Luc; Iannacito, Martina

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1027395