LEADER 01000naa a22002652 4500
001 NLM339468580
003 DE-627
005 20231226002950.0
007 cr uuu---uuuuu
008 231226s2023 xx |||||o 00| ||eng c
024 7  |a 10.1109/TPAMI.2022.3167175 |2 doi
028 52 |a pubmed24n1131.xml
035    |a (DE-627)NLM339468580
035    |a (NLM)35417348
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Zamir, Syed Waqas |e verfasserin |4 aut
245 10 |a Learning Enriched Features for Fast Image Restoration and Enhancement
264  1 |c 2023
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Completed 06.04.2023
500    |a Date Revised 06.04.2023
500    |a published: Print-Electronic
500    |a Citation Status PubMed-not-MEDLINE
520    |a Given a degraded input image, image restoration aims to recover the missing high-quality image content. Numerous applications demand effective image restoration, e.g., computational photography, surveillance, autonomous vehicles, and remote sensing. Significant advances in image restoration have been made in recent years, dominated by convolutional neural networks (CNNs). The widely used CNN-based methods typically operate either on full-resolution or on progressively low-resolution representations. In the former case, spatial details are preserved but the contextual information cannot be precisely encoded. In the latter case, generated outputs are semantically reliable but spatially less accurate. This paper presents a new architecture with a holistic goal of maintaining spatially precise high-resolution representations through the entire network, and receiving complementary contextual information from the low-resolution representations. The core of our approach is a multi-scale residual block containing the following key elements: (a) parallel multi-resolution convolution streams for extracting multi-scale features, (b) information exchange across the multi-resolution streams, (c) a non-local attention mechanism for capturing contextual information, and (d) attention-based multi-scale feature aggregation. Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details. Extensive experiments on six real image benchmark datasets demonstrate that our method, named MIRNet-v2, achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement. The source code and pre-trained models are available at https://github.com/swz30/MIRNetv2
650  4 |a Journal Article
700 1  |a Arora, Aditya |e verfasserin |4 aut
700 1  |a Khan, Salman |e verfasserin |4 aut
700 1  |a Hayat, Munawar |e verfasserin |4 aut
700 1  |a Khan, Fahad Shahbaz |e verfasserin |4 aut
700 1  |a Yang, Ming-Hsuan |e verfasserin |4 aut
700 1  |a Shao, Ling |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on pattern analysis and machine intelligence |d 1979 |g 45(2023), 2 vom: 13. Feb., Seite 1934-1948 |w (DE-627)NLM098212257 |x 1939-3539 |7 nnns
773 18 |g volume:45 |g year:2023 |g number:2 |g day:13 |g month:02 |g pages:1934-1948
856 40 |u http://dx.doi.org/10.1109/TPAMI.2022.3167175 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_NLM
912    |a GBV_ILN_350
951    |a AR
952    |d 45 |j 2023 |e 2 |b 13 |c 02 |h 1934-1948