Learning the Space of Deep Models

Gianluca Berardi (first author); Luca De Luigi (second author); Samuele Salti (penultimate author); Luigi Di Stefano (last author)

Abstract

Embedding of large but redundant data, such as images or text, in a hierarchy of lower-dimensional spaces is one of the key features of representation learning approaches, which nowadays provide state-of-the-art solutions to problems once believed hard or impossible to solve. In this work, in a plot twist with a strong meta aftertaste, we show how trained deep models are as redundant as the data they are optimized to process, and how it is therefore possible to use deep learning models to embed deep learning models. In particular, we show that it is possible to use representation learning to learn a fixed-size, low-dimensional embedding space of trained deep models and that such a space can be explored by interpolation or optimization to attain ready-to-use models. We find that it is possible to learn an embedding space of multiple instances of the same architecture and of multiple architectures. We address image classification and neural representation of signals, showing how our embedding space can be learnt so as to capture the notions of performance and 3D shape, respectively. In the Multi-Architecture setting we also show how an embedding trained only on a subset of architectures can learn to generate already-trained instances of architectures it never sees instantiated at training time.
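The abstract describes the approach only at a high level. As a rough illustration of the core idea, embedding trained deep models into a low-dimensional space and exploring that space by interpolation to obtain ready-to-use models, the sketch below trains a plain autoencoder over flattened network weights and decodes an interpolated embedding back into a working model. This is a hypothetical, minimal example, not the authors' implementation: the `WeightAutoencoder` class, the helper functions, and all sizes are illustrative assumptions.

```python
# Minimal, hypothetical sketch of a "space of deep models":
# an autoencoder over flattened network weights, explored by interpolation.
# This is NOT the paper's implementation; names and sizes are illustrative.
import torch
import torch.nn as nn

def flatten_params(model: nn.Module) -> torch.Tensor:
    """Concatenate all parameters of a trained model into one flat vector."""
    return torch.cat([p.detach().flatten() for p in model.parameters()])

def load_params(model: nn.Module, flat: torch.Tensor) -> nn.Module:
    """Write a flat weight vector back into a model of the same architecture."""
    offset = 0
    for p in model.parameters():
        n = p.numel()
        p.data.copy_(flat[offset:offset + n].view_as(p))
        offset += n
    return model

class WeightAutoencoder(nn.Module):
    """Maps flattened weights to a fixed-size embedding and back."""
    def __init__(self, weight_dim: int, embed_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(weight_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim))
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(), nn.Linear(256, weight_dim))

    def forward(self, w):
        z = self.encoder(w)
        return self.decoder(z), z

def make_net() -> nn.Module:
    """A tiny architecture standing in for the trained models to be embedded."""
    return nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

# Toy usage: a collection of (here untrained) instances of one architecture.
instances = [make_net() for _ in range(8)]
weights = torch.stack([flatten_params(m) for m in instances])

ae = WeightAutoencoder(weight_dim=weights.shape[1])
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):                      # learn to reconstruct weight vectors
    recon, _ = ae(weights)
    loss = nn.functional.mse_loss(recon, weights)
    opt.zero_grad(); loss.backward(); opt.step()

# Explore the embedding space: interpolate two models and decode a new one.
with torch.no_grad():
    z = ae.encoder(weights[:2])
    z_mid = 0.5 * (z[0] + z[1])
    new_weights = ae.decoder(z_mid)
new_model = load_params(make_net(), new_weights)   # ready-to-use instance
```

In the paper the embedding space is additionally shaped to capture performance (image classification) or 3D geometry (neural signal representations); the plain reconstruction loss above is only a placeholder for whatever objective drives the learned space.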
Year: 2022
Published in: 2022 26th International Conference on Pattern Recognition (ICPR), pp. 2482-2488
Citation: Gianluca Berardi, L.D.L. (2022). Learning the Space of Deep Models. New York: IEEE Computer Society [10.1109/ICPR56361.2022.9956085].
Authors: Gianluca Berardi, Luca De Luigi, Samuele Salti, Luigi Di Stefano
Files in this record:

paper.pdf (open access)
  Type: Postprint
  License: Free open-access license
  Size: 864.12 kB
  Format: Adobe PDF

supplementary.pdf (open access)
  Type: Supplementary file
  License: Free open-access license
  Size: 2.37 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/905514
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: 0