Sparse SVM for Sufficient Data Reduction
Published in: | IEEE Transactions on Pattern Analysis and Machine Intelligence. - 1979. - 44(2022), no. 9, 23 Sept., pages 5560-5571 |
---|---|
Author: | |
Format: | Online article |
Language: | English |
Published: | 2022 |
Access to the parent work: | IEEE Transactions on Pattern Analysis and Machine Intelligence |
Keywords: | Journal Article |
Abstract: | Kernel-based methods for support vector machines (SVMs) have shown highly advantageous performance in various applications. However, they may incur prohibitive computational costs on large-scale datasets. Data reduction (reducing the number of support vectors) therefore becomes necessary, which gives rise to the topic of the sparse SVM. Motivated by this problem, this paper considers the sparsity-constrained kernel SVM optimization in order to control the number of support vectors. Based on the established optimality conditions associated with the stationary equations, a Newton-type method is developed to handle the sparsity-constrained optimization. The method enjoys a one-step convergence property if the starting point is chosen within a local region of a stationary point, leading to very high computational speed. Numerical comparisons with several powerful solvers demonstrate that the proposed method performs exceptionally well, particularly on large-scale datasets, yielding far fewer support vectors in shorter computational time. |
---|---|
Description: | Date Revised: 05.08.2022; Published: Print-Electronic; Citation Status: PubMed-not-MEDLINE |
ISSN: | 1939-3539 |
DOI: | 10.1109/TPAMI.2021.3075339 |
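The sparsity constraint described in the abstract, which caps the number of support vectors, can be illustrated with a minimal sketch. Note this is not the paper's Newton-type solver: it uses plain projected gradient ascent with hard thresholding on the SVM dual, and the toy data, kernel choice, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch of a sparsity-constrained kernel SVM dual, ||alpha||_0 <= s,
# solved by projected gradient ascent with hard thresholding. The L0
# constraint caps the number of support vectors, which is the "data
# reduction" the abstract refers to. (This is NOT the paper's method.)

rng = np.random.default_rng(0)

# Two Gaussian clusters with labels -1 / +1, symmetric about the origin,
# so a bias-free linear-kernel SVM suffices for this illustration.
n = 40
X = np.vstack([rng.normal(-2.0, 1.0, (n // 2, 2)),
               rng.normal(+2.0, 1.0, (n // 2, 2))])
y = np.hstack([-np.ones(n // 2), np.ones(n // 2)])

K = X @ X.T                                # linear kernel matrix
Q = (y[:, None] * y[None, :]) * K          # dual Hessian
s, C, eta = 8, 1.0, 0.002                  # sparsity budget, box bound, step

alpha = np.zeros(n)
for _ in range(1000):
    grad = 1.0 - Q @ alpha                 # gradient of the dual objective
    alpha = np.clip(alpha + eta * grad, 0.0, C)
    # Hard thresholding: keep the s largest dual variables, zero the rest.
    alpha[np.argsort(alpha)[:-s]] = 0.0

n_sv = int((alpha > 1e-10).sum())          # at most s support vectors remain
pred = np.sign(K @ (alpha * y))            # decision values on training data
acc = (pred == y).mean()
print(f"support vectors: {n_sv} (budget {s}), train accuracy: {acc:.2f}")
```

The thresholding step is what distinguishes this from an ordinary dual SVM solver: it projects onto the set {alpha : ||alpha||_0 <= s}, so the final model is supported on at most s training points regardless of how many a dense solver would select.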