Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning

We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only "virtually" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
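The abstract describes the core computation: find the local perturbation the model is most sensitive to (without using labels), then penalize the divergence between the prediction at the clean input and at the perturbed input. A minimal sketch of that loss is below, assuming a generic `predict` function that maps an input vector to class probabilities; the adversarial direction is estimated by one step of power iteration as in the paper, but the required gradient is taken here by central finite differences so the sketch stays model-agnostic (the paper computes it with a single extra backpropagation instead).

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    """KL divergence KL(p || q) between two probability vectors."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def vat_loss(predict, x, xi=1e-6, epsilon=2.0, n_power=1, h=1e-4, seed=0):
    """Virtual adversarial loss for a single input x (illustrative sketch).

    `predict`, `xi`, `epsilon`, `n_power` follow the roles described in
    the paper; the finite-difference step `h` is an implementation detail
    of this sketch, not part of the original method.
    """
    rng = np.random.default_rng(seed)
    p = predict(x)                      # reference distribution p(y|x)
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d)              # random unit direction
    for _ in range(n_power):
        # Gradient of KL(p || predict(x + xi*d + e)) w.r.t. e at e = 0,
        # approximated coordinate-wise by central differences.
        g = np.zeros_like(x)
        for i in range(x.size):
            e_i = np.zeros_like(x)
            e_i.flat[i] = h
            g.flat[i] = (kl(p, predict(x + xi * d + e_i))
                         - kl(p, predict(x + xi * d - e_i))) / (2 * h)
        d = g / (np.linalg.norm(g) + 1e-12)
    r_vadv = epsilon * d                # virtual adversarial perturbation
    return kl(p, predict(x + r_vadv))
```

With a toy linear-softmax classifier, `vat_loss(lambda x: softmax(x @ W), x)` returns a non-negative scalar that is large exactly when the prediction is unstable around `x`; minimizing it alongside the supervised loss is what smooths the model, and since no label enters the computation it applies equally to unlabeled data.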

Full description

Bibliographic details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 41(2019), issue 8, 24 Aug., pages 1979-1993
Main author: Miyato, Takeru (Author)
Other authors: Maeda, Shin-Ichi, Koyama, Masanori, Ishii, Shin
Format: Online article
Language: English
Published: 2019
Collection access: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000caa a22002652c 4500
001 NLM286809184
003 DE-627
005 20250223205356.0
007 cr uuu---uuuuu
008 231225s2019 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2018.2858821  |2 doi 
028 5 2 |a pubmed25n0955.xml 
035 |a (DE-627)NLM286809184 
035 |a (NLM)30040630 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Miyato, Takeru  |e verfasserin  |4 aut 
245 1 0 |a Virtual Adversarial Training  |b A Regularization Method for Supervised and Semi-Supervised Learning 
264 1 |c 2019 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 23.07.2019 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only "virtually" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
650 4 |a Journal Article 
700 1 |a Maeda, Shin-Ichi  |e verfasserin  |4 aut 
700 1 |a Koyama, Masanori  |e verfasserin  |4 aut 
700 1 |a Ishii, Shin  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 41(2019), 8 vom: 24. Aug., Seite 1979-1993  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnas 
773 1 8 |g volume:41  |g year:2019  |g number:8  |g day:24  |g month:08  |g pages:1979-1993 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2018.2858821  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 41  |j 2019  |e 8  |b 24  |c 08  |h 1979-1993