|
|
|
|
LEADER |
01000naa a22002652 4500 |
001 |
NLM355330040 |
003 |
DE-627 |
005 |
20231226064123.0 |
007 |
cr uuu---uuuuu |
008 |
231226s2023 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2023.3255107
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1184.xml
|
035 |
|
|
|a (DE-627)NLM355330040
|
035 |
|
|
|a (NLM)37028347
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Wei, Pengxu
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Taylor Neural Network for Real-World Image Super-Resolution
|
264 |
|
1 |
|c 2023
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 07.04.2023
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status Publisher
|
520 |
|
|
|a Due to the difficulty of collecting paired Low-Resolution (LR) and High-Resolution (HR) images, recent research on single-image Super-Resolution (SR) has often been criticized for the data bottleneck of synthetic image degradation between LRs and HRs. Recently, the emergence of real-world SR datasets, e.g., RealSR and DRealSR, has promoted the exploration of Real-World image Super-Resolution (RWSR). RWSR exposes a more practical image degradation, which greatly challenges the learning capacity of deep neural networks to reconstruct high-quality images from low-quality images collected in realistic scenarios. In this paper, we explore Taylor series approximation in prevalent deep neural networks for image reconstruction, and propose a very general Taylor architecture to derive Taylor Neural Networks (TNNs) in a principled manner. Our TNN builds Taylor Modules with Taylor Skip Connections (TSCs) to approximate the feature projection functions, following the spirit of the Taylor series. TSCs connect the input directly with each layer, sequentially producing different high-order Taylor maps that attend to more image details, and then aggregate the high-order information from the different layers. Via simple skip connections alone, TNN is compatible with various existing neural networks to effectively learn high-order components of the input image with little increase in parameters. Furthermore, we have conducted extensive experiments to evaluate our TNNs with different backbones on two RWSR benchmarks, achieving superior performance in comparison with existing baseline methods.
|
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Xie, Ziwei
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Li, Guanbin
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Lin, Liang
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g PP(2023) vom: 14. März
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:PP
|g year:2023
|g day:14
|g month:03
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2023.3255107
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d PP
|j 2023
|b 14
|c 03
|