OSLNet : Deep Small-Sample Classification with an Orthogonal Softmax Layer


Detailed Description

Bibliographic Details

Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - (2020), 6 May
First author: Li, Xiaoxu (Author)
Other authors: Chang, Dongliang, Ma, Zhanyu, Tan, Zheng-Hua, Xue, Jing-Hao, Cao, Jie, Yu, Jingyi, Guo, Jun
Format: Online article
Language: English
Published: 2020
Parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM309693632
003 DE-627
005 20240229162840.0
007 cr uuu---uuuuu
008 231225s2020 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2020.2990277  |2 doi 
028 5 2 |a pubmed24n1308.xml 
035 |a (DE-627)NLM309693632 
035 |a (NLM)32386152 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Li, Xiaoxu  |e verfasserin  |4 aut 
245 1 0 |a OSLNet  |b Deep Small-Sample Classification with an Orthogonal Softmax Layer 
264 1 |c 2020 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 27.02.2024 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a A deep neural network of multiple nonlinear layers forms a large function space, which can easily lead to overfitting when it encounters small-sample data. To mitigate overfitting in small-sample classification, learning more discriminative features from small-sample data is becoming a new trend. To this end, this paper aims to find a subspace of neural networks that can facilitate a large decision margin. Specifically, we propose the Orthogonal Softmax Layer (OSL), which makes the weight vectors in the classification layer remain orthogonal during both the training and test processes. The Rademacher complexity of a network using the OSL is only 1/K, where K is the number of classes, of that of a network using the fully connected classification layer, leading to a tighter generalization error bound. Experimental results demonstrate that the proposed OSL has better performance than the methods used for comparison on four small-sample benchmark datasets, as well as its applicability to large-sample datasets. Codes are available at: https://github.com/dongliangchang/OSLNet 
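The abstract states that the OSL keeps the classification layer's weight vectors orthogonal during both training and test. A minimal sketch of one way to guarantee this, per the paper's masking idea, is to multiply the weight matrix elementwise by a fixed block-diagonal 0/1 mask so that different classes use disjoint input coordinates; the function names, shapes, and the even-split requirement below are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def osl_mask(in_features, num_classes):
    """Fixed 0/1 mask assigning each class a disjoint block of input
    coordinates, so the masked weight vectors are orthogonal by construction.
    (Illustrative: assumes in_features divides evenly among classes.)"""
    assert in_features % num_classes == 0
    block = in_features // num_classes
    mask = np.zeros((in_features, num_classes))
    for k in range(num_classes):
        mask[k * block:(k + 1) * block, k] = 1.0
    return mask

def osl_forward(x, weight, mask):
    """Softmax over logits computed with the masked (orthogonal) weights.
    The same mask is applied at training and test time."""
    logits = x @ (weight * mask)                  # (batch, num_classes)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))   # free parameters, trained as usual
M = osl_mask(8, 4)
Wm = W * M
# Class weight vectors are pairwise orthogonal regardless of W's values:
gram = Wm.T @ Wm
print(np.allclose(gram - np.diag(np.diag(gram)), 0.0))
```

Because the masked columns have disjoint support, their pairwise inner products are exactly zero for any value of the trainable weights, which is what lets orthogonality survive gradient updates without an explicit constraint.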
650 4 |a Journal Article 
700 1 |a Chang, Dongliang  |e verfasserin  |4 aut 
700 1 |a Ma, Zhanyu  |e verfasserin  |4 aut 
700 1 |a Tan, Zheng-Hua  |e verfasserin  |4 aut 
700 1 |a Xue, Jing-Hao  |e verfasserin  |4 aut 
700 1 |a Cao, Jie  |e verfasserin  |4 aut 
700 1 |a Yu, Jingyi  |e verfasserin  |4 aut 
700 1 |a Guo, Jun  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g (2020) vom: 06. Mai  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g year:2020  |g day:06  |g month:05 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2020.2990277  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |j 2020  |b 06  |c 05