M. Marzolla (2011). Fast training of support vector machines on the Cell processor. Neurocomputing, 74(17), 3700-3707 [10.1016/j.neucom.2011.04.011].
Fast training of support vector machines on the Cell processor
Marzolla, Moreno
2011
Abstract
Support vector machines (SVMs) are a widely used technique for classification, clustering, and data analysis. While efficient algorithms for training SVMs are available, large datasets make training and classification computationally challenging. In this paper we exploit modern processor architectures to improve the training speed of LIBSVM, a well-known implementation of the sequential minimal optimization algorithm. We describe LIBSVMCBE, an optimized version of LIBSVM that takes advantage of the peculiar architecture of the Cell Broadband Engine. We assess the performance of LIBSVMCBE on real-world training problems and show that this optimization is particularly effective on large, dense datasets.