Manifold Neural Network With Non-Gradient Optimization

A deep neural network (DNN) generally takes thousands of iterations to optimize via gradient descent and thus converges slowly. In addition, softmax, as a decision layer, may ignore the distribution information of the data during classification. To tackle these problems, we propose a novel manifold neural network based on non-gradient optimization, i.e., analytical-form solutions. Since the activation function is generally invertible, we reconstruct the network via forward ridge regression and low-rank backward approximation, which achieves rapid convergence. Moreover, by unifying the flexible Stiefel manifold and an adaptive support vector machine, we devise a novel decision layer that efficiently fits the manifold structure of the data and the label information. Consequently, a joint non-gradient optimization method is designed to generate the network with analytical-form results. Furthermore, an acceleration strategy is utilized to reduce the time complexity on high-dimensional datasets. Finally, extensive experiments validate the superior performance of the model.
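The forward ridge-regression idea from the abstract can be sketched as follows. This is a hypothetical minimal example, not the authors' implementation: assuming an invertible activation (leaky ReLU here) and a known target activation, the layer weights admit a closed-form ridge solution instead of gradient descent; the regularization parameter `lam` is chosen purely for illustration.

```python
import numpy as np

def leaky_relu(z, a=0.1):
    # Invertible activation: identity for z > 0, slope a otherwise.
    return np.where(z > 0, z, a * z)

def leaky_relu_inv(h, a=0.1):
    # Exact inverse of leaky_relu.
    return np.where(h > 0, h, h / a)

def ridge_fit_layer(X, H_target, lam=1e-2):
    """Closed-form solve for W such that leaky_relu(X @ W) ~= H_target.

    Inverting the activation turns the layer fit into ordinary ridge
    regression: W = (X^T X + lam I)^{-1} X^T g^{-1}(H_target).
    """
    Z = leaky_relu_inv(H_target)            # pre-activation targets
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Z)

# Synthetic check: recover a layer from its own activations.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 8))
W_true = rng.standard_normal((8, 4))
H = leaky_relu(X @ W_true)

W = ridge_fit_layer(X, H)                   # one analytical solve, no iterations
H_hat = leaky_relu(X @ W)
```

Because the inversion is exact, a single linear solve reconstructs the layer up to the small ridge bias, which is the source of the rapid convergence the abstract claims.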

Detailed Description

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 45 (2023), no. 3, 12 March, pages 3986-3993
Main author: Zhang, Rui (Author)
Other authors: Jiao, Ziheng, Zhang, Hongyuan, Li, Xuelong
Format: Online article
Language: English
Published: 2023
Parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM340800747
003 DE-627
005 20231226010305.0
007 cr uuu---uuuuu
008 231226s2023 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2022.3174574  |2 doi 
028 5 2 |a pubmed24n1135.xml 
035 |a (DE-627)NLM340800747 
035 |a (NLM)35552149 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Zhang, Rui  |e verfasserin  |4 aut 
245 1 0 |a Manifold Neural Network With Non-Gradient Optimization 
264 1 |c 2023 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 07.04.2023 
500 |a Date Revised 07.04.2023 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a A deep neural network (DNN) generally takes thousands of iterations to optimize via gradient descent and thus converges slowly. In addition, softmax, as a decision layer, may ignore the distribution information of the data during classification. To tackle these problems, we propose a novel manifold neural network based on non-gradient optimization, i.e., analytical-form solutions. Since the activation function is generally invertible, we reconstruct the network via forward ridge regression and low-rank backward approximation, which achieves rapid convergence. Moreover, by unifying the flexible Stiefel manifold and an adaptive support vector machine, we devise a novel decision layer that efficiently fits the manifold structure of the data and the label information. Consequently, a joint non-gradient optimization method is designed to generate the network with analytical-form results. Furthermore, an acceleration strategy is utilized to reduce the time complexity on high-dimensional datasets. Finally, extensive experiments validate the superior performance of the model. 
650 4 |a Journal Article 
700 1 |a Jiao, Ziheng  |e verfasserin  |4 aut 
700 1 |a Zhang, Hongyuan  |e verfasserin  |4 aut 
700 1 |a Li, Xuelong  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 45(2023), 3 vom: 12. März, Seite 3986-3993  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:45  |g year:2023  |g number:3  |g day:12  |g month:03  |g pages:3986-3993 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2022.3174574  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 45  |j 2023  |e 3  |b 12  |c 03  |h 3986-3993