Variance Loss in Variational Autoencoders / Asperti, Andrea. - Print. - 12565:(2020), pp. 297-308. (Paper presented at the conference Machine Learning, Optimization, and Data Science, LOD 2020, held at Certosa di Pontignano, Siena, Italy, July 19-23, 2020) [10.1007/978-3-030-64583-0_28].

Variance Loss in Variational Autoencoders

Asperti, Andrea
2020

Abstract

In this article, we highlight what appears to be a major issue with Variational Autoencoders, evidenced by extensive experimentation with different network architectures and datasets: the variance of generated data is significantly lower than that of the training data. Since generative models are usually evaluated with metrics such as the Fréchet Inception Distance (FID), which compare the distributions of (features of) real versus generated images, this variance loss typically results in degraded scores. The problem is particularly relevant in a two-stage setting, where a second VAE is used to sample in the latent space of the first VAE. The reduced variance creates a mismatch between the actual distribution of latent variables and the distribution generated by the second VAE, which hinders the beneficial effects of the second stage. By renormalizing the output of the second VAE toward the expected spherical normal distribution, we obtain a marked improvement in the quality of generated samples, also confirmed in terms of FID.
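The renormalization mentioned in the abstract can be illustrated with a minimal sketch. The following NumPy example is an assumption for illustration only (the function name renormalize and the per-dimension standardization are hypothetical; the paper's exact rescaling may differ): it recenters and rescales a batch of latents sampled from the second-stage VAE so that its empirical statistics match the zero-mean, unit-variance spherical prior expected by the first VAE's decoder.

    import numpy as np

    def renormalize(z, eps=1e-8):
        """Rescale a batch of generated latents toward a spherical
        standard normal: subtract the empirical per-dimension mean and
        divide by the empirical per-dimension standard deviation
        (illustrative sketch, not the paper's exact procedure)."""
        mu = z.mean(axis=0, keepdims=True)
        sigma = z.std(axis=0, keepdims=True)
        return (z - mu) / (sigma + eps)

    # Toy check: latents with deliberately reduced variance (the
    # mismatch described in the abstract) recover unit variance
    # after renormalizing.
    z = 0.7 * np.random.randn(10000, 64)   # second-stage samples, std < 1
    z_hat = renormalize(z)
    print(z.std(axis=0).mean())            # ~0.7
    print(z_hat.std(axis=0).mean())        # ~1.0

In this sketch, the renormalized latents z_hat would then be fed to the first VAE's decoder in place of the raw second-stage samples.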
Year: 2020
Conference: Machine Learning, Optimization, and Data Science (LOD 2020)
Pages: 297-308
Files in this item:

File: Variance Loss in Variational Autoencoders.pdf
Access: open access
Type: Postprint
License: free open-access license
Size: 3.18 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/796185
Citations:
  • Scopus: 2