Transformer Based Pluralistic Image Completion With Reduced Information Loss

Transformer based methods have achieved great success in image inpainting recently. However, we find that these solutions regard each pixel as a token, thus suffering from an information loss issue from two aspects: 1) They downsample the input image into much lower resolutions for efficiency consideration. 2) They quantize 256³ RGB values to a small number (such as 512) of quantized color values. The indices of quantized pixels are used as tokens for the inputs and prediction targets of the transformer. To mitigate these issues, we propose a new transformer based framework called "PUT". Specifically, to avoid input downsampling while maintaining computation efficiency, we design a patch-based auto-encoder P-VQVAE. The encoder converts the masked image into non-overlapped patch tokens and the decoder recovers the masked regions from the inpainted tokens while keeping the unmasked regions unchanged. To eliminate the information loss caused by input quantization, an Un-quantized Transformer is applied. It directly takes features from the P-VQVAE encoder as input without any quantization and only regards the quantized tokens as prediction targets. Furthermore, to make the inpainting process more controllable, we introduce semantic and structural conditions as extra guidance. Extensive experiments show that our method greatly outperforms existing transformer based methods on image fidelity and achieves much higher diversity and better fidelity than state-of-the-art pluralistic inpainting methods on complex large-scale datasets (e.g., ImageNet).
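The patch tokenization described above can be illustrated with a minimal sketch. This is not the authors' P-VQVAE code; it only shows the idea of splitting an image into non-overlapping patch tokens so that the raw, continuous pixel values reach the model without downsampling or color quantization. The function name and patch size below are illustrative assumptions.

```python
import numpy as np

def image_to_patch_tokens(image: np.ndarray, patch: int = 8) -> np.ndarray:
    """Split an (H, W, C) image into non-overlapping patch tokens.

    Each token is a flattened raw patch, so no downsampling and no
    quantization of the 256^3 RGB color space is applied -- the
    continuous pixel values survive intact, which is the property
    PUT's un-quantized transformer input preserves.
    """
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0, "image must tile evenly into patches"
    # Rearrange into (block_row, block_col, row, col, channel), then
    # flatten each patch into one token vector.
    tokens = (
        image.reshape(h // patch, patch, w // patch, patch, c)
             .transpose(0, 2, 1, 3, 4)
             .reshape(-1, patch * patch * c)
    )
    return tokens  # shape: (num_patches, patch * patch * C)

img = np.random.rand(32, 32, 3)
tok = image_to_patch_tokens(img, patch=8)
print(tok.shape)  # (16, 192): a 32x32 image yields 4x4 patches of 8*8*3 values
```

In the actual P-VQVAE, such patch features are produced by a learned encoder and quantized only for the prediction targets, never for the transformer's input.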

Detailed Description

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 46(2024), no. 10, 31 Oct., pages 6652-6668
Main Author: Liu, Qiankun (Author)
Other Authors: Jiang, Yuqi, Tan, Zhentao, Chen, Dongdong, Fu, Ying, Chu, Qi, Hua, Gang, Yu, Nenghai
Format: Online Article
Language: English
Published: 2024
Access to parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000caa a22002652c 4500
001 NLM370538471
003 DE-627
005 20250306005310.0
007 cr uuu---uuuuu
008 240404s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2024.3384406  |2 doi 
028 5 2 |a pubmed25n1234.xml 
035 |a (DE-627)NLM370538471 
035 |a (NLM)38564348 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Liu, Qiankun  |e verfasserin  |4 aut 
245 1 0 |a Transformer Based Pluralistic Image Completion With Reduced Information Loss 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 06.09.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Transformer based methods have achieved great success in image inpainting recently. However, we find that these solutions regard each pixel as a token, thus suffering from an information loss issue from two aspects: 1) They downsample the input image into much lower resolutions for efficiency consideration. 2) They quantize 256³ RGB values to a small number (such as 512) of quantized color values. The indices of quantized pixels are used as tokens for the inputs and prediction targets of the transformer. To mitigate these issues, we propose a new transformer based framework called "PUT". Specifically, to avoid input downsampling while maintaining computation efficiency, we design a patch-based auto-encoder P-VQVAE. The encoder converts the masked image into non-overlapped patch tokens and the decoder recovers the masked regions from the inpainted tokens while keeping the unmasked regions unchanged. To eliminate the information loss caused by input quantization, an Un-quantized Transformer is applied. It directly takes features from the P-VQVAE encoder as input without any quantization and only regards the quantized tokens as prediction targets. Furthermore, to make the inpainting process more controllable, we introduce semantic and structural conditions as extra guidance. Extensive experiments show that our method greatly outperforms existing transformer based methods on image fidelity and achieves much higher diversity and better fidelity than state-of-the-art pluralistic inpainting methods on complex large-scale datasets (e.g., ImageNet).
650 4 |a Journal Article 
700 1 |a Jiang, Yuqi  |e verfasserin  |4 aut 
700 1 |a Tan, Zhentao  |e verfasserin  |4 aut 
700 1 |a Chen, Dongdong  |e verfasserin  |4 aut 
700 1 |a Fu, Ying  |e verfasserin  |4 aut 
700 1 |a Chu, Qi  |e verfasserin  |4 aut 
700 1 |a Hua, Gang  |e verfasserin  |4 aut 
700 1 |a Yu, Nenghai  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 46(2024), 10 vom: 31. Okt., Seite 6652-6668  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnas 
773 1 8 |g volume:46  |g year:2024  |g number:10  |g day:31  |g month:10  |g pages:6652-6668 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2024.3384406  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 46  |j 2024  |e 10  |b 31  |c 10  |h 6652-6668