Self-Supervised Learning by Estimating Twin Class Distribution

We present Twist, a simple and theoretically explainable self-supervised representation learning method by classifying large-scale unlabeled datasets in an end-to-end way. We employ a siamese network terminated by a softmax operation to produce twin class distributions of two augmented images. Without supervision, we enforce the class distributions of different augmentations to be consistent. However, simply minimizing the divergence between augmentations will generate collapsed solutions, i.e., outputting the same class distribution for all images. In this case, little information about the input images is preserved. To solve this problem, we propose to maximize the mutual information between the input image and the output class predictions. Specifically, we minimize the entropy of the distribution for each sample to make the class prediction assertive, and maximize the entropy of the mean distribution to make the predictions of different samples diverse. In this way, Twist can naturally avoid the collapsed solutions without specific designs such as asymmetric networks, stop-gradient operations, or momentum encoders. As a result, Twist outperforms previous state-of-the-art methods on a wide range of tasks. Specifically, on the semi-supervised classification task, Twist achieves 61.2% top-1 accuracy with 1% ImageNet labels using a ResNet-50 as backbone, surpassing previous best results by an improvement of 6.2%. Codes and pre-trained models are available at https://github.com/bytedance/TWIST
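The abstract fully specifies the objective: a consistency term between the twin class distributions, a per-sample entropy term that sharpens each prediction, and a batch-mean entropy term that diversifies predictions across samples. The sketch below is a minimal PyTorch rendering of that description, not the paper's exact formulation (see the official code at https://github.com/bytedance/TWIST); the symmetric-KL form of the consistency term, the weights `alpha` and `beta`, and the helper name `twist_loss` are illustrative assumptions.

```python
# Minimal sketch of the Twist objective as described in the abstract.
# Assumptions (not from the paper): symmetric-KL consistency term,
# weights `alpha`/`beta`, and the name `twist_loss`.
import torch
import torch.nn.functional as F

def twist_loss(logits_1: torch.Tensor,
               logits_2: torch.Tensor,
               alpha: float = 1.0,
               beta: float = 1.0,
               eps: float = 1e-8) -> torch.Tensor:
    """logits_1, logits_2: (N, K) siamese-network outputs for two
    augmentations of the same batch of N images over K classes."""
    p1 = F.softmax(logits_1, dim=1)
    p2 = F.softmax(logits_2, dim=1)
    log_p1, log_p2 = torch.log(p1 + eps), torch.log(p2 + eps)

    # 1) Consistency: the twin class distributions of the two
    #    augmentations should agree (symmetric KL divergence).
    consistency = 0.5 * ((p1 * (log_p1 - log_p2)).sum(dim=1)
                         + (p2 * (log_p2 - log_p1)).sum(dim=1)).mean()

    # 2) Sharpness: minimize the entropy of each per-sample
    #    distribution, making the class prediction assertive.
    def entropy(p: torch.Tensor) -> torch.Tensor:
        return -(p * torch.log(p + eps)).sum(dim=1).mean()
    sharpness = 0.5 * (entropy(p1) + entropy(p2))

    # 3) Diversity: maximize the entropy of the batch-mean distribution,
    #    making predictions of different samples diverse (minus sign below).
    def mean_entropy(p: torch.Tensor) -> torch.Tensor:
        m = p.mean(dim=0)
        return -(m * torch.log(m + eps)).sum()
    diversity = 0.5 * (mean_entropy(p1) + mean_entropy(p2))

    return consistency + alpha * sharpness - beta * diversity
```

Minimizing the per-sample entropy while maximizing the entropy of the batch-mean distribution corresponds to maximizing the mutual information I(X; Y) = H(Y) - H(Y|X) between inputs and class predictions, which is how the abstract explains why collapsed (constant) solutions are avoided without asymmetric networks, stop-gradients, or momentum encoders.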

Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 32(2023), from: 14, pages 2228-2236
First author: Wang, Feng (Author)
Other authors: Kong, Tao, Zhang, Rufeng, Liu, Huaping, Li, Hang
Format: Online article
Language: English
Published: 2023
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
LEADER 01000naa a22002652 4500
001 NLM355625385
003 DE-627
005 20231226064735.0
007 cr uuu---uuuuu
008 231226s2023 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2023.3266169  |2 doi 
028 5 2 |a pubmed24n1185.xml 
035 |a (DE-627)NLM355625385 
035 |a (NLM)37058381 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Wang, Feng  |e verfasserin  |4 aut 
245 1 0 |a Self-Supervised Learning by Estimating Twin Class Distribution 
264 1 |c 2023 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 24.04.2023 
500 |a Date Revised 24.04.2023 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a We present Twist, a simple and theoretically explainable self-supervised representation learning method by classifying large-scale unlabeled datasets in an end-to-end way. We employ a siamese network terminated by a softmax operation to produce twin class distributions of two augmented images. Without supervision, we enforce the class distributions of different augmentations to be consistent. However, simply minimizing the divergence between augmentations will generate collapsed solutions, i.e., outputting the same class distribution for all images. In this case, little information about the input images is preserved. To solve this problem, we propose to maximize the mutual information between the input image and the output class predictions. Specifically, we minimize the entropy of the distribution for each sample to make the class prediction assertive, and maximize the entropy of the mean distribution to make the predictions of different samples diverse. In this way, Twist can naturally avoid the collapsed solutions without specific designs such as asymmetric network, stop-gradient operation, or momentum encoder. As a result, Twist outperforms previous state-of-the-art methods on a wide range of tasks. Specifically on the semi-supervised classification task, Twist achieves 61.2% top-1 accuracy with 1% ImageNet labels using a ResNet-50 as backbone, surpassing previous best results by an improvement of 6.2%. Codes and pre-trained models are available at https://github.com/bytedance/TWIST 
650 4 |a Journal Article 
700 1 |a Kong, Tao  |e verfasserin  |4 aut 
700 1 |a Zhang, Rufeng  |e verfasserin  |4 aut 
700 1 |a Liu, Huaping  |e verfasserin  |4 aut 
700 1 |a Li, Hang  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 32(2023) vom: 14., Seite 2228-2236  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:32  |g year:2023  |g day:14  |g pages:2228-2236 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2023.3266169  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 32  |j 2023  |b 14  |h 2228-2236