
Enhancing variational generation through self-decomposition

Asperti A.; Bugo L.; Filippini D.
2022

Abstract

In this article we introduce the notion of Split Variational Autoencoder (SVAE), whose output x̂ is obtained as a weighted sum σ⊙x̂₁ + (1−σ)⊙x̂₂ of two generated images x̂₁ and x̂₂, where σ is a learned compositional map. The composing images x̂₁ and x̂₂, as well as the σ-map, are automatically synthesized by the model. The network is trained as a usual Variational Autoencoder, with a negative log-likelihood loss between training and reconstructed images. No additional loss is required for x̂₁, x̂₂, or σ, nor any form of human tuning. The decomposition is nondeterministic, but follows two main schemes, which we may roughly categorize as either “syntactic” or “semantic.” In the first case, the map tends to exploit the strong correlation between adjacent pixels, splitting the image into two complementary high-frequency sub-images. In the second case, the map typically focuses on the contours of objects, splitting the image into interesting variations of its content, with more marked and distinctive features. In this case, according to empirical observations, the Fréchet Inception Distance (FID) of x̂₁ and x̂₂ is usually lower (hence better) than that of x̂, which clearly suffers from being the average of the former. In a sense, a SVAE forces the Variational Autoencoder to make choices, in contrast with its intrinsic tendency to average between alternatives in order to minimize the reconstruction loss towards a specific sample. According to the FID metric, our technique, tested on typical datasets such as MNIST, CIFAR-10, and CelebA, outperforms all previous purely variational architectures (those not relying on normalizing flows).
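The compositional step described in the abstract can be sketched as follows. This is a minimal illustration of the output combination only, not the authors' implementation: the function name is hypothetical, and the decoder is simply assumed to emit two candidate images together with a per-pixel mixing map σ with values in [0, 1].

```python
import numpy as np

def svae_combine(x1, x2, sigma):
    """Blend two candidate reconstructions with a per-pixel map.

    x1, x2 : candidate images of identical shape
    sigma  : compositional map in [0, 1], same shape as the images
    Returns the SVAE output  sigma * x1 + (1 - sigma) * x2.
    """
    return sigma * x1 + (1.0 - sigma) * x2

# Toy example: a hard 0/1 map takes the left half of a 4x4 "image"
# from x1 and the right half from x2.
x1 = np.ones((4, 4))
x2 = np.zeros((4, 4))
sigma = np.zeros((4, 4))
sigma[:, :2] = 1.0

out = svae_combine(x1, x2, sigma)
```

Note that with σ ≡ 0.5 the output is just the pixelwise average of the two candidates; a sharper, near-binary σ is what lets the model "make choices" instead of averaging, as the abstract argues.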
Files for this item:
File: Enhancing_Variational_Generation_Through_Self-Decomposition.pdf (open access)
Type: Publisher's version (PDF)
License: Creative Commons
Size: 1.43 MB, Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/890948
Citations
  • Scopus: 1
  • Web of Science (ISI): 1