Title: Sparsity in Variational Autoencoders
Author(s): A. Asperti
Unibo author(s):
Year: 2019
Book title: Proceedings of the first International Conference on Advances in Signal Processing and Artificial Intelligence
First page: 18
Last page: 22
Abstract: When working in high-dimensional latent spaces, the internal encoding of data in Variational Autoencoders naturally becomes sparse. We discuss this known but controversial phenomenon, sometimes referred to as overpruning to emphasize the resulting under-use of model capacity. In fact, it is an important form of self-regularization, with all the typical benefits associated with sparsity: it forces the model to focus on the truly important features, enhancing their disentanglement and reducing the risk of overfitting. In particular, it provides a major methodological guide for the correct tuning of model capacity: progressively augmenting it until sparsity is attained, or conversely reducing the dimension of the network by removing links to zeroed-out neurons.
Date of final version: 29 April 2019
Appears in types: 4.01 Contribution in Conference Proceedings
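The abstract suggests pruning network links attached to zeroed-out latent neurons. A common way to detect such collapsed dimensions (not taken from the paper itself) is to measure the variance of the posterior means across the dataset: a dimension whose mean barely varies carries no information. Below is a minimal sketch of this diagnostic using synthetic posterior means in place of a real encoder; the threshold value is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic posterior means for 1000 inputs in an 8-dimensional latent space.
n_samples, latent_dim = 1000, 8
mu = rng.normal(size=(n_samples, latent_dim))
# Simulate sparsity/collapse: dimensions 5..7 carry (almost) no information.
mu[:, 5:] *= 0.001

def active_units(mu, threshold=0.01):
    """Boolean mask of latent dimensions whose posterior-mean variance
    across the dataset exceeds `threshold` (hypothetical cutoff)."""
    return mu.var(axis=0) > threshold

mask = active_units(mu)
print("active dimensions:", int(mask.sum()), "of", latent_dim)
```

Dimensions flagged inactive by such a mask are candidates for removal, matching the methodology the abstract describes: grow the latent space until collapsed dimensions appear, or shrink the network by discarding them.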