A Progressive Fusion Generative Adversarial Network for Realistic and Consistent Video Super-Resolution

How to effectively fuse temporal information from consecutive frames remains a non-trivial problem in video super-resolution (SR), since most existing fusion strategies (direct fusion, slow fusion, or 3D convolution) either fail to make full use of temporal information or incur excessive computational cost. To this end, we propose a novel progressive fusion network for video SR, in which frames are processed through progressive separation and fusion to thoroughly exploit spatio-temporal information. We further incorporate a multi-scale structure and hybrid convolutions into the network to capture a wide range of dependencies. We also propose a non-local operation that directly extracts long-range spatio-temporal correlations, replacing traditional motion estimation and motion compensation (ME&MC). This design dispenses with complicated ME&MC algorithms yet outperforms various ME&MC schemes. Finally, we improve generative adversarial training for video SR to avoid temporal artifacts such as flickering and ghosting. In particular, we propose a frame variation loss with a single-sequence training method to generate more realistic and temporally consistent videos. Extensive experiments on public datasets show the superiority of our method over state-of-the-art methods in terms of both performance and complexity. Our code is available at https://github.com/psychopa4/MSHPFNL
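The abstract names two concrete techniques worth unpacking: a non-local operation that replaces explicit motion estimation and motion compensation (ME&MC), and a frame variation loss that suppresses flicker. The sketches below are illustrative PyTorch approximations, not the authors' released code (see https://github.com/psychopa4/MSHPFNL for that); the class and function names, tensor layouts, and channel sizes are assumptions made for the example.

A spatio-temporal non-local block lets every position in a clip attend to every other position across all frames, so long-range correspondences are captured without explicit ME&MC:

import torch
import torch.nn as nn

class SpatioTemporalNonLocal(nn.Module):
    """Embedded-Gaussian non-local block (after Wang et al. 2018) applied to a
    video clip. Illustrative sketch only; channels must be >= 2."""

    def __init__(self, channels: int):
        super().__init__()
        inter = channels // 2  # reduced embedding width
        self.theta = nn.Conv3d(channels, inter, kernel_size=1)
        self.phi = nn.Conv3d(channels, inter, kernel_size=1)
        self.g = nn.Conv3d(channels, inter, kernel_size=1)
        self.out = nn.Conv3d(inter, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, height, width)
        b, c, t, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (B, THW, C')
        k = self.phi(x).flatten(2)                    # (B, C', THW)
        v = self.g(x).flatten(2).transpose(1, 2)      # (B, THW, C')
        attn = torch.softmax(q @ k, dim=-1)           # affinity of every position to every other
        y = (attn @ v).transpose(1, 2).reshape(b, -1, t, h, w)
        return x + self.out(y)                        # residual connection

Because the attention map is quadratic in the number of positions, a practical implementation would likely apply such a block to downsampled feature maps rather than full-resolution frames.

For the frame variation loss, one natural reading of the abstract is that the differences between consecutive generated SR frames are matched against the corresponding differences in the ground-truth HR frames, penalizing flickering and ghosting that per-frame losses cannot see. A minimal sketch under that assumption:

def frame_variation_loss(sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
    """sr, hr: (batch, time, channels, height, width) video tensors."""
    sr_var = sr[:, 1:] - sr[:, :-1]  # temporal variation of generated frames
    hr_var = hr[:, 1:] - hr[:, :-1]  # temporal variation of ground truth
    return torch.mean(torch.abs(sr_var - hr_var))  # L1 distance (assumed)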

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence. - 1979. - 44(2022), 5, 03 May, pages 2264-2280
Main Author: Yi, Peng (Author)
Other Authors: Wang, Zhongyuan, Jiang, Kui, Jiang, Junjun, Lu, Tao, Ma, Jiayi
Format: Online Article
Language: English
Published: 2022
Access to Parent Work: IEEE Transactions on Pattern Analysis and Machine Intelligence
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM318367076
003 DE-627
005 20231225165216.0
007 cr uuu---uuuuu
008 231225s2022 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2020.3042298  |2 doi 
028 5 2 |a pubmed24n1061.xml 
035 |a (DE-627)NLM318367076 
035 |a (NLM)33270559 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Yi, Peng  |e verfasserin  |4 aut 
245 1 2 |a A Progressive Fusion Generative Adversarial Network for Realistic and Consistent Video Super-Resolution 
264 1 |c 2022 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 04.04.2022 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a How to effectively fuse temporal information from consecutive frames remains a non-trivial problem in video super-resolution (SR), since most existing fusion strategies (direct fusion, slow fusion, or 3D convolution) either fail to make full use of temporal information or incur excessive computational cost. To this end, we propose a novel progressive fusion network for video SR, in which frames are processed through progressive separation and fusion to thoroughly exploit spatio-temporal information. We further incorporate a multi-scale structure and hybrid convolutions into the network to capture a wide range of dependencies. We also propose a non-local operation that directly extracts long-range spatio-temporal correlations, replacing traditional motion estimation and motion compensation (ME&MC). This design dispenses with complicated ME&MC algorithms yet outperforms various ME&MC schemes. Finally, we improve generative adversarial training for video SR to avoid temporal artifacts such as flickering and ghosting. In particular, we propose a frame variation loss with a single-sequence training method to generate more realistic and temporally consistent videos. Extensive experiments on public datasets show the superiority of our method over state-of-the-art methods in terms of both performance and complexity. Our code is available at https://github.com/psychopa4/MSHPFNL 
650 4 |a Journal Article 
700 1 |a Wang, Zhongyuan  |e verfasserin  |4 aut 
700 1 |a Jiang, Kui  |e verfasserin  |4 aut 
700 1 |a Jiang, Junjun  |e verfasserin  |4 aut 
700 1 |a Lu, Tao  |e verfasserin  |4 aut 
700 1 |a Ma, Jiayi  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 44(2022), 5 vom: 03. Mai, Seite 2264-2280  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:44  |g year:2022  |g number:5  |g day:03  |g month:05  |g pages:2264-2280 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2020.3042298  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 44  |j 2022  |e 5  |b 03  |c 05  |h 2264-2280