LEADER |
01000naa a22002652 4500 |
001 |
NLM333258843 |
003 |
DE-627 |
005 |
20231225221305.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2022 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2021.3127323
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1110.xml
|
035 |
|
|
|a (DE-627)NLM333258843
|
035 |
|
|
|a (NLM)34788217
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Shao, Huajie
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a ControlVAE
|b Tuning, Analytical Properties, and Performance Analysis
|
264 |
|
1 |
|c 2022
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 08.11.2022
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a This paper reviews the novel concept of a controllable variational autoencoder (ControlVAE), discusses its parameter tuning to meet application needs, derives its key analytic properties, and offers useful extensions and applications. ControlVAE is a new variational autoencoder (VAE) framework that combines automatic control theory with the basic VAE to stabilize the KL-divergence of VAE models to a specified value. It leverages a non-linear PI controller, a variant of the proportional-integral-derivative (PID) controller, to dynamically tune the weight of the KL-divergence term in the evidence lower bound (ELBO) using the output KL-divergence as feedback. This allows us to precisely control the KL-divergence to a desired value (set point) that is effective in avoiding posterior collapse and learning disentangled representations. While prior work developed alternative techniques for controlling the KL-divergence, we show that our PI controller has better stability properties and thus better convergence, thereby producing better disentangled representations from finite training data. In order to improve the ELBO of ControlVAE over that of the regular VAE, we provide a simplified theoretical analysis to inform the choice of set point for the KL-divergence of ControlVAE. We evaluate the proposed method on three tasks: image generation, language modeling, and disentangled representation learning. The results show that ControlVAE can achieve much better reconstruction quality than the other methods for comparable disentanglement. On the language modeling task, our method can avoid posterior collapse (KL vanishing) and improve the diversity of generated text. Moreover, it can change the optimization trajectory, improving the ELBO and the reconstruction quality for image generation.
|
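The abstract above describes ControlVAE's core mechanism: a non-linear PI controller that reads the observed KL-divergence at each training step and adjusts the weight of the KL term in the ELBO so that the KL-divergence tracks a chosen set point. As an illustration only, the following minimal Python sketch shows such a PI-style feedback update; the gain values, the clamping range, and the sigmoid form of the proportional term are assumptions for demonstration and are not taken from this record.

import math

class PIController:
    """PI-style feedback controller for the KL weight (beta) in the ELBO.

    Illustrative sketch only: kp, ki, the clamping range, and the sigmoid
    form of the proportional term are assumed, not taken from this record.
    """

    def __init__(self, kl_set_point, kp=0.01, ki=0.0001,
                 beta_min=0.0, beta_max=1.0):
        self.kl_set_point = kl_set_point  # desired KL-divergence (set point)
        self.kp = kp                      # proportional gain (assumed)
        self.ki = ki                      # integral gain (assumed)
        self.beta_min = beta_min          # lower clamp for beta
        self.beta_max = beta_max          # upper clamp for beta
        self.integral = 0.0               # accumulated error feedback

    def update(self, kl_observed):
        # Error between the set point and the KL-divergence observed this step.
        error = self.kl_set_point - kl_observed
        # Non-linear (sigmoid-shaped) proportional term: approaches kp when
        # the observed KL-divergence overshoots the set point.
        p_term = self.kp / (1.0 + math.exp(error))
        # Integral term accumulates past errors so the loop settles on the set point.
        self.integral -= self.ki * error
        beta = p_term + self.integral
        # Clamp beta; the training loss would then be
        # reconstruction_loss + beta * kl_divergence.
        return min(max(beta, self.beta_min), self.beta_max)

A training loop would call update(kl_observed) once per step and use the returned beta to weight the KL term: beta drops when the KL-divergence falls below the set point (counteracting posterior collapse, i.e. KL vanishing) and rises when the KL-divergence overshoots it.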
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Xiao, Zhisheng
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Yao, Shuochao
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Sun, Dachun
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zhang, Aston
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Liu, Shengzhong
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Wang, Tianshi
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Li, Jinyang
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Abdelzaher, Tarek
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 44(2022), 12 vom: 01. Dez., Seite 9285-9297
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:44
|g year:2022
|g number:12
|g day:01
|g month:12
|g pages:9285-9297
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2021.3127323
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 44
|j 2022
|e 12
|b 01
|c 12
|h 9285-9297
|