We study quantum neural networks whose generated function is the expectation value of the sum of single-qubit observables across all qubits. In Girardi et al. (CMP, 2025), it is proven that the probability distributions of such generated functions converge in distribution to a Gaussian process in the limit of infinite width, both for untrained networks with randomly initialized parameters and for trained networks. In this paper, we make this convergence quantitative in terms of the Wasserstein distance of order 1. First, we establish an upper bound on the distance between the probability distribution of the function generated by any untrained network of finite width and the Gaussian process with the same covariance; the proof relies on Stein's method to estimate the Wasserstein distance of order 1. Next, we analyze the training dynamics of the network via gradient flow, proving an upper bound on the distance between the probability distribution of the function generated by the trained network and the corresponding Gaussian process. This proof rests on a quantitative upper bound on the maximum variation of any parameter during training, which implies that, for sufficiently large widths, training occurs in the lazy regime: each parameter changes only by a small amount. While the convergence result of Girardi et al. (CMP, 2025) holds at a fixed training time, our upper bounds are uniform in time and hold even as t → ∞.
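The flavor of the quantitative statement can be illustrated with a classical toy analogue (this is an assumption for illustration only, not the paper's quantum model): take the "network output" to be a normalized sum of n i.i.d. bounded observables; by the central limit theorem it approaches a Gaussian, and the empirical Wasserstein-1 distance to Gaussian samples shrinks as the width n grows.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Toy classical analogue (NOT the paper's quantum model): the output is a
# normalized sum of n i.i.d. +-1 observables, which by the CLT approaches a
# standard Gaussian; the empirical W1 distance decreases with the width n.
rng = np.random.default_rng(0)

def w1_to_gaussian(n, samples=20000):
    # each observable takes values +-1 (unit variance after normalization)
    obs = rng.choice([-1.0, 1.0], size=(samples, n))
    outputs = obs.sum(axis=1) / np.sqrt(n)  # normalized sum, variance 1
    gauss = rng.standard_normal(samples)
    return wasserstein_distance(outputs, gauss)

d_small = w1_to_gaussian(4)    # narrow "network"
d_large = w1_to_gaussian(256)  # wide "network"
```

Here `w1_to_gaussian` is a hypothetical helper: it estimates the distance from samples, whereas the paper proves rigorous upper bounds that are, moreover, uniform in training time.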

Melchor Hernandez, A., Girardi, F., Pastorello, D., De Palma, G. (2025). Quantitative Convergence of Trained Quantum Neural Networks to a Gaussian Process. Annales Henri Poincaré, published online October 2025, 1–57. doi:10.1007/s00023-025-01631-6.

Quantitative Convergence of Trained Quantum Neural Networks to a Gaussian Process

Melchor Hernandez, Anderson; Girardi, Filippo; Pastorello, Davide; De Palma, Giacomo
2025

Files in this record:

File: s00023-025-01631-6.pdf
Access: open access
Type: Version of Record (publisher's PDF)
License: Open Access licence, Creative Commons Attribution (CC BY)
Size: 836.55 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1035494
Citations: Scopus 1