Ingrosso, A., Pacelli, R., Rotondo, P., & Gerace, F. (2025). Statistical Mechanics of Transfer Learning in Fully Connected Networks in the Proportional Limit. Physical Review Letters, 134(17), 1-9. DOI: 10.1103/PhysRevLett.134.177301
Statistical Mechanics of Transfer Learning in Fully Connected Networks in the Proportional Limit
Gerace F.
2025
Abstract
Transfer learning (TL) is a well-established machine learning technique to boost the generalization performance on a specific (target) task using information gained from a related (source) task, and it crucially depends on the ability of a network to learn useful features. Leveraging recent analytical progress in the proportional regime of deep learning theory (i.e., the limit where the size of the training set P and the size of the hidden layers N are taken to infinity keeping their ratio α = P/N finite), in this Letter we develop a novel single-instance Franz-Parisi formalism that yields an effective theory for TL in fully connected neural networks. Unlike the (lazy-training) infinite-width limit, where TL is ineffective, we demonstrate that in the proportional limit TL occurs due to a renormalized source-target kernel that quantifies their relatedness and determines whether TL is beneficial for generalization.
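The TL setup the abstract describes can be made concrete with a minimal sketch: a fully connected network whose hidden features are reused from a source task, with only the linear readout retrained on a related target task. This is a generic illustration, not the paper's Franz-Parisi analysis; the teacher-student tasks, the task-correlation coefficient, and all sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, P = 20, 50, 200          # input dim, hidden width, training set size (alpha = P/N = 4)

# Source and target tasks: linear teachers whose weight vectors are correlated,
# so that "relatedness" between the tasks can be dialed via the overlap (0.9 here).
w_s = rng.standard_normal(D)
w_t = 0.9 * w_s + np.sqrt(1 - 0.9**2) * rng.standard_normal(D)

X = rng.standard_normal((P, D)) / np.sqrt(D)
y_tgt = X @ w_t

def train_readout(H, y, ridge=1e-3):
    """Ridge-regress a linear readout on hidden features H."""
    return np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ y)

# First-layer weights: purely random (training from scratch) versus weights
# tilted toward the source teacher, a crude stand-in for source pretraining.
W_random = rng.standard_normal((N, D)) / np.sqrt(D)
W_source = W_random + 0.5 * np.outer(np.ones(N), w_s) / np.sqrt(D)

X_test = rng.standard_normal((500, D)) / np.sqrt(D)
y_test = X_test @ w_t

def target_test_error(W):
    H = np.tanh(X @ W.T)            # hidden activations on the target train set
    a = train_readout(H, y_tgt)     # retrain only the readout on the target task
    return float(np.mean((np.tanh(X_test @ W.T) @ a - y_test) ** 2))

err_scratch = target_test_error(W_random)
err_transfer = target_test_error(W_source)
```

Comparing `err_scratch` and `err_transfer` across values of the teacher overlap mimics the question the Letter answers analytically: when does reusing source features help target generalization, and when does it hurt.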
| File | Size | Format | |
|---|---|---|---|
| Statistical-Mechanics-of-Transfer-Learning-in-Fully-Connected-Networks-in-the-Proportional-Limit.pdf (open access; publisher's version / Version of Record; free open-access license) | 726.98 kB | Adobe PDF | View/Open |
| supplementary_material_1.pdf (open access; supplementary file; free open-access license) | 382.09 kB | Adobe PDF | View/Open |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


