LEADER |
01000naa a22002652 4500 |
001 |
NLM336154445 |
003 |
DE-627 |
005 |
20231225231507.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2022 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2022.3144892
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1120.xml
|
035 |
|
|
|a (DE-627)NLM336154445
|
035 |
|
|
|a (NLM)35081029
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Pan, Zhaoqing
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a VCRNet
|b Visual Compensation Restoration Network for No-Reference Image Quality Assessment
|
264 |
|
1 |
|c 2022
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 02.02.2022
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Guided by the free-energy principle, generative adversarial network (GAN)-based no-reference image quality assessment (NR-IQA) methods have improved image quality prediction accuracy. However, GANs do not handle the restoration task of free-energy-principle-guided NR-IQA methods well, especially for severely distorted images, so the quality reconstruction relationship between a distorted image and its restored image cannot be built accurately. To address this problem, a visual compensation restoration network (VCRNet)-based NR-IQA method is proposed, which uses a non-adversarial model to handle the distorted-image restoration task efficiently. The proposed VCRNet consists of a visual restoration network and a quality estimation network. To build the quality reconstruction relationship between a distorted image and its restored image accurately, a visual compensation module, an optimized asymmetric residual block, and an error-map-based mixed loss function are proposed to increase the restoration capability of the visual restoration network. To further address NR-IQA for severely distorted images, the multi-level restoration features obtained from the visual restoration network are used for image quality estimation. To demonstrate the effectiveness of the proposed VCRNet, seven representative IQA databases are used, and experimental results show that VCRNet achieves state-of-the-art image quality prediction accuracy. The implementation of the proposed VCRNet has been released at https://github.com/NUIST-Videocoding/VCRNet
|
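|a [Editorial note] The abstract above mentions an error-map-based mixed loss function used to strengthen the visual restoration network. The record does not give its formulation; the sketch below is a minimal, hypothetical PyTorch-style illustration in which a pixel-wise L1 restoration term is mixed with an L1 term between a predicted error map and the actual restoration residual. The function name mixed_restoration_loss, the alpha weighting, and the predicted_error_map input are assumptions for illustration, not the paper's definition (see the linked repository for the authors' implementation).

import torch
import torch.nn.functional as F

def mixed_restoration_loss(restored, pristine, predicted_error_map, alpha=0.8):
    # Pixel-wise restoration term between the restored and the pristine image.
    restoration_term = F.l1_loss(restored, pristine)
    # Reference error map: per-pixel absolute residual of the restoration,
    # detached so it serves as a fixed target for the error-map branch.
    target_error_map = torch.abs(restored - pristine).detach()
    # Error-map term: the predicted error map should match the actual residual.
    error_map_term = F.l1_loss(predicted_error_map, target_error_map)
    # Mixed loss: assumed convex combination of the two terms.
    return alpha * restoration_term + (1.0 - alpha) * error_map_term
|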
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Yuan, Feng
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Lei, Jianjun
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Fang, Yuming
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Shao, Xiao
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Kwong, Sam
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 31(2022) vom: 01., Seite 1613-1627
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:31
|g year:2022
|g day:01
|g pages:1613-1627
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2022.3144892
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 31
|j 2022
|b 01
|h 1613-1627
|