
Selection methods for extended least squares support vector machines. (English) Zbl 1146.68439

Summary: The purpose of this paper is to present extended Least Squares Support Vector Machines (LS-SVM), where data selection methods are used to obtain a sparse LS-SVM solution, and to survey and compare the most important data selection approaches.
The selection methods are compared both in terms of their theoretical background and through extensive simulations.
The paper shows that partial reduction is an efficient way of obtaining a reduced-complexity sparse LS-SVM solution, while still exploiting the full knowledge contained in the whole training data set. It also shows that the reduction technique based on the Reduced Row Echelon Form (RREF) of the kernel matrix is superior to other data selection approaches.
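To make the idea of partial reduction concrete, the following is a minimal sketch of a partially reduced LS-SVM fit in the regression setting. It keeps every training sample as a constraint (the rows of the system) but uses only a selected subset of kernel columns as centers, solving the resulting overdetermined system in the least squares sense. The function name `fit_sparse_lssvm`, the ridge-style handling of the regularization constant `C`, and the exact form of the normal equations are an illustrative reconstruction, not the paper's own code; `basis_idx` would come from a selection method such as the RREF-based one sketched below.

```python
import numpy as np

def fit_sparse_lssvm(K, y, basis_idx, C=10.0):
    """Partially reduced LS-SVM sketch: all N training constraints
    are retained (rows), but only the M selected basis vectors serve
    as kernel centers (columns), so M << N weights are fitted.
    Hypothetical reconstruction, not the paper's implementation."""
    N = K.shape[0]
    Kr = K[:, basis_idx]                    # N x M reduced kernel matrix
    A = np.hstack([np.ones((N, 1)), Kr])    # bias column + kernel columns
    reg = np.eye(A.shape[1]) / C            # ridge term as regularization
    reg[0, 0] = 0.0                         # do not penalize the bias
    sol = np.linalg.solve(A.T @ A + reg, A.T @ y)
    b, alpha = sol[0], sol[1:]
    return b, alpha
```

The sparsity shows up at prediction time: the model evaluates the kernel only at the M selected centers, while the fit itself still used all N training points.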
Data selection for obtaining a sparse LS-SVM solution can be done in different representations of the training data: in the input space, in the intermediate feature space, and in the kernel space. Selection in the kernel space can be performed by finding an approximate basis of the kernel matrix.
The RREF-based method is a data selection approach with a favorable property: a tolerance parameter provides an explicit trade-off between complexity and accuracy.
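The following sketch shows how such an RREF-style selection with a tolerance could look: Gaussian elimination with partial pivoting over the kernel matrix, where any column whose remaining pivot falls below `tol` is treated as approximately dependent on the columns already chosen. The function name `rref_basis` and the defaults are assumptions for illustration; a larger `tol` yields fewer basis vectors (sparser model), a smaller `tol` keeps the approximation closer to the full kernel matrix.

```python
import numpy as np

def rref_basis(K, tol=1e-3):
    """Select pivot columns of K as an approximate basis via RREF
    with a tolerance (illustrative sketch, not the paper's code)."""
    A = K.astype(float).copy()
    m, n = A.shape
    pivots = []
    row = 0
    for col in range(n):
        # partial pivoting: largest remaining entry in this column
        p = row + np.argmax(np.abs(A[row:, col]))
        if np.abs(A[p, col]) <= tol:
            continue  # column ~ linear combination of chosen columns
        A[[row, p]] = A[[p, row]]   # swap pivot row into place
        A[row] /= A[row, col]       # normalize the pivot row
        mask = np.arange(m) != row
        A[mask] -= np.outer(A[mask, col], A[row])  # eliminate column
        pivots.append(col)
        row += 1
        if row == m:
            break
    return pivots
```

The returned pivot indices identify the training samples whose kernel columns span the others to within the tolerance; these would play the role of `basis_idx` in the fitting sketch above.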
The paper contributes to the construction of high-performance, moderate-complexity LS-SVMs.

MSC:

68T05 Learning and adaptive systems in artificial intelligence

Software:

RSVM
