
Fast and Memory-Efficient Import Vector Domain Description

Decherchi S.; Cavalli A.
2020

Abstract

One-class learning is a classical and hard computational intelligence task. The literature offers several effective and powerful solutions to the problem; in the kernel machines realm, examples include Support Vector Domain Description and the recently proposed Import Vector Domain Description (IVDD), which directly delivers the probability that a sample belongs to the class. Here, we propose and discuss two optimization techniques for IVDD that significantly improve its memory footprint and consequently let it scale to larger datasets than the original formulation can handle. First, we propose using random features to approximate the Gaussian kernel, together with a primal optimization algorithm. Second, we propose a Nyström-like approximation of the functional, together with a fast-converging and accurate self-consistent algorithm. In particular, we replace the a posteriori sparsity of the original IVDD optimization method with landmark samples selected a priori at random from the dataset. We find this second approximation to be superior: compared to the original IVDD with the RBF kernel, it achieves high accuracy, is much faster, and yields substantial memory savings.
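The two approximations named in the abstract are standard kernel-approximation devices. The NumPy sketch below illustrates them in isolation, assuming an RBF kernel with bandwidth sigma; it is a minimal illustration of random Fourier features and of a Nyström landmark feature map, not the paper's IVDD algorithm. The self-consistent optimization and the actual landmark-selection policy are omitted, and all function names and parameters here are hypothetical.

```python
import numpy as np

def rbf_kernel(X, Y, sigma):
    """Exact Gaussian (RBF) kernel: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def random_fourier_features(X, n_features, sigma, rng):
    """Rahimi-Recht random features: z(x) = sqrt(2/D) cos(x W + b), with
    W ~ N(0, sigma^-2 I) and b ~ U[0, 2pi], so E[z(x)^T z(y)] = k(x, y)."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def nystrom_features(X, landmarks, sigma):
    """Nystrom map z(x) = K_mm^{-1/2} k_m(x), so Z Z^T = K_nm K_mm^{-1} K_mn."""
    K_mm = rbf_kernel(landmarks, landmarks, sigma)
    K_nm = rbf_kernel(X, landmarks, sigma)
    # Symmetric inverse square root via eigendecomposition, with jitter for stability.
    w, V = np.linalg.eigh(K_mm + 1e-10 * np.eye(len(landmarks)))
    K_mm_inv_sqrt = V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.T
    return K_nm @ K_mm_inv_sqrt

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                 # toy one-class sample
K = rbf_kernel(X, X, sigma=2.0)                # exact 500 x 500 kernel matrix
Z_rff = random_fourier_features(X, 2000, sigma=2.0, rng=rng)
landmarks = X[rng.choice(len(X), size=50, replace=False)]  # a-priori random landmarks
Z_nys = nystrom_features(X, landmarks, sigma=2.0)
print("RFF error:    ", np.abs(K - Z_rff @ Z_rff.T).max())
print("Nystrom error:", np.abs(K - Z_nys @ Z_nys.T).max())
```

Both maps replace the full n-by-n kernel matrix with an explicit n-by-D (random features) or n-by-m (Nyström landmarks) feature matrix, which is where the memory savings claimed in the abstract come from: storage drops from O(n^2) to O(nm), and primal solvers can operate on the features directly.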
Decherchi S., Cavalli A. (2020). Fast and Memory-Efficient Import Vector Domain Description. Neural Processing Letters, 52(1), 511-524. https://doi.org/10.1007/s11063-020-10243-6

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/781823