Title: | Fast training of support vector machines on the Cell processor |
Author(s): | MARZOLLA, MORENO |
Unibo author(s): | |
Year: | 2011 |
Journal: | |
Digital Object Identifier (DOI): | http://dx.doi.org/10.1016/j.neucom.2011.04.011 |
Abstract: | Support vector machines (SVMs) are a widely used technique for classification, clustering, and data analysis. While efficient algorithms for training SVMs are available, large datasets make both training and classification computationally challenging. In this paper we exploit modern processor architectures to improve the training speed of LIBSVM, a well-known implementation of the sequential minimal optimization algorithm. We describe LIBSVMCBE, an optimized version of LIBSVM that takes advantage of the peculiar architecture of the Cell Broadband Engine. We assess the performance of LIBSVMCBE on real-world training problems, and we show that this optimization is particularly effective on large, dense datasets. |
Date of final version in UGOV: | 2013-06-08 19:13:01 |
Appears in type(s): | 1.01 Journal article |
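The abstract refers to LIBSVM's sequential minimal optimization trainer, whose interface LIBSVMCBE reuses. As a point of reference only, the following is a minimal sketch of how a training problem is set up and solved through the stock LIBSVM C API (assuming LIBSVM 3.x); the toy data, labels, and parameter values are illustrative, and the Cell-specific optimizations described in the paper are not shown here.

```c
/* Minimal sketch: training and querying a C-SVC model with the stock
 * LIBSVM C API. This illustrates the baseline library the paper starts
 * from, not the LIBSVMCBE port itself. */
#include <stdio.h>
#include "svm.h"   /* header shipped with the LIBSVM distribution */

int main(void)
{
    /* Four 2-D points of a toy problem, in LIBSVM's sparse node
     * format: each feature vector is terminated by index = -1. */
    struct svm_node x[4][3] = {
        {{1, 0.0}, {2, 0.0}, {-1, 0.0}},
        {{1, 0.0}, {2, 1.0}, {-1, 0.0}},
        {{1, 1.0}, {2, 0.0}, {-1, 0.0}},
        {{1, 1.0}, {2, 1.0}, {-1, 0.0}},
    };
    struct svm_node *rows[4] = { x[0], x[1], x[2], x[3] };
    double labels[4] = { -1, -1, -1, +1 };

    struct svm_problem prob = { .l = 4, .y = labels, .x = rows };

    /* Illustrative hyper-parameters; unlisted fields default to zero. */
    struct svm_parameter param = {
        .svm_type    = C_SVC,
        .kernel_type = RBF,
        .gamma       = 0.5,
        .C           = 10,
        .cache_size  = 100,   /* kernel cache size in MB */
        .eps         = 1e-3,  /* SMO stopping tolerance */
        .shrinking   = 1,
    };

    const char *err = svm_check_parameter(&prob, &param);
    if (err) {
        fprintf(stderr, "parameter error: %s\n", err);
        return 1;
    }

    struct svm_model *model = svm_train(&prob, &param);

    /* Classify one query point with the trained model. */
    struct svm_node query[3] = { {1, 1.0}, {2, 1.0}, {-1, 0.0} };
    printf("predicted label: %g\n", svm_predict(model, query));

    svm_free_and_destroy_model(&model);
    svm_destroy_param(&param);
    return 0;
}
```

Because LIBSVMCBE keeps this interface, the speedups reported in the paper apply to the kernel evaluation inside svm_train rather than to the calling code above.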