LEADER |
01000caa a22002652 4500 |
001 |
NLM292011245 |
003 |
DE-627 |
005 |
20240229162058.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2018 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2018.2886758
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1308.xml
|
035 |
|
|
|a (DE-627)NLM292011245
|
035 |
|
|
|a (NLM)30571625
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Chen, Tao
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a SS-HCNN
|b Semi-Supervised Hierarchical Convolutional Neural Network for Image Classification
|
264 |
|
1 |
|c 2018
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 27.02.2024
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status Publisher
|
520 |
|
|
|a The availability of large-scale annotated data and the uneven separability of different data categories have become two major impediments to deep learning for image classification. In this paper, we present a Semi-Supervised Hierarchical Convolutional Neural Network (SS-HCNN) to address these two challenges. A large-scale unsupervised maximum margin clustering technique is designed, which splits images into a number of hierarchical clusters iteratively to learn cluster-level CNNs at parent nodes and category-level CNNs at leaf nodes. The splitting uses the similarity of CNN features to group visually similar images into the same cluster, which relieves the uneven data separability constraint. With the hierarchical cluster-level CNNs capturing certain high-level image category information, the category-level CNNs can be trained with a small number of labelled images, which relieves the data annotation constraint. A novel cluster splitting criterion is also designed which automatically terminates the image clustering in the tree hierarchy. The proposed SS-HCNN has been evaluated on the CIFAR-100 and ImageNet classification datasets. Experiments show that the SS-HCNN trained using a portion of the labelled training images achieves performance comparable to that of other CNNs fully trained using all labelled images. Additionally, the SS-HCNN trained using all labelled images clearly outperforms other fully trained CNNs.
|
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Lu, Shijian
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Fan, Jiayuan
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g (2018) vom: 14. Dez.
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g year:2018
|g day:14
|g month:12
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2018.2886758
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|j 2018
|b 14
|c 12
|