LEADER 01000caa a22002652c 4500
001    NLM355353458
003    DE-627
005    20250304152801.0
007    cr uuu---uuuuu
008    231226s2023 xx |||||o 00| ||eng c
024 7  |a 10.1109/TIP.2023.3261747 |2 doi
028 52 |a pubmed25n1184.xml
035    |a (DE-627)NLM355353458
035    |a (NLM)37030697
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Gao, Guangwei |e verfasserin |4 aut
245 10 |a CTCNet |b A CNN-Transformer Cooperation Network for Face Image Super-Resolution
264  1 |c 2023
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Completed 11.04.2023
500    |a Date Revised 11.04.2023
500    |a published: Print
500    |a Citation Status MEDLINE
520    |a Recently, deep convolutional neural network (CNN)-steered face super-resolution methods have achieved great progress in restoring degraded facial details through joint training with facial priors. However, these methods have some obvious limitations. On the one hand, multi-task joint learning requires additional annotation of the dataset, and the introduced prior network significantly increases the computational cost of the model. On the other hand, the limited receptive field of CNNs reduces the fidelity and naturalness of the reconstructed facial images, resulting in suboptimal reconstructed images. In this work, we propose an efficient CNN-Transformer Cooperation Network (CTCNet) for face super-resolution tasks, which uses a multi-scale connected encoder-decoder architecture as the backbone. Specifically, we first devise a novel Local-Global Feature Cooperation Module (LGCM), composed of a Facial Structure Attention Unit (FSAU) and a Transformer block, to promote the consistent restoration of local facial details and the global facial structure. Then, we design an efficient Feature Refinement Module (FRM) to enhance the encoded features. Finally, to further improve the restoration of fine facial details, we present a Multi-scale Feature Fusion Unit (MFFU) to adaptively fuse the features from different stages of the encoder. Extensive evaluations on various datasets demonstrate that the proposed CTCNet significantly outperforms other state-of-the-art methods. Source code will be available at https://github.com/IVIPLab/CTCNet
650  4 |a Journal Article
700 1  |a Xu, Zixiang |e verfasserin |4 aut
700 1  |a Li, Juncheng |e verfasserin |4 aut
700 1  |a Yang, Jian |e verfasserin |4 aut
700 1  |a Zeng, Tieyong |e verfasserin |4 aut
700 1  |a Qi, Guo-Jun |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society |d 1992 |g 32(2023) vom: 01., Seite 1978-1991 |w (DE-627)NLM09821456X |x 1941-0042 |7 nnas
773 18 |g volume:32 |g year:2023 |g day:01 |g pages:1978-1991
856 40 |u http://dx.doi.org/10.1109/TIP.2023.3261747 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_NLM
912    |a GBV_ILN_350
951    |a AR
952    |d 32 |j 2023 |b 01 |h 1978-1991