LEADER |
01000naa a22002652 4500 |
001 |
NLM328551031 |
003 |
DE-627 |
005 |
20231225203155.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2021 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2021.3098245
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1095.xml
|
035 |
|
|
|a (DE-627)NLM328551031
|
035 |
|
|
|a (NLM)34310304
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Yang, Jiachen
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a No-Reference Quality Assessment for Screen Content Images Using Visual Edge Model and AdaBoosting Neural Network
|
264 |
|
1 |
|c 2021
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 02.08.2021
|
500 |
|
|
|a Date Revised 02.08.2021
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a In this paper, a competitive no-reference metric is proposed to assess the perceptual quality of screen content images (SCIs), using a human visual edge model and an AdaBoosting neural network. Motivated by the theory that the edge information reflecting the visual quality of an SCI is effectively captured by the difference-of-Gaussian (DOG) model of the human visual system, we first compute two types of multi-scale edge maps via the DOG operator; the two types of maps carry contour and edge information, respectively. Then, after locally normalizing the edge maps, L-moments distribution estimation is used to fit their DOG coefficients, and the fitted L-moments parameters serve as edge features. Finally, an AdaBoosting back-propagation neural network (ABPNN) maps these quality-aware features to the perceptual quality score of the SCI. The ABPNN is chosen because it replaces a shallow regression network with a deeper architecture while retaining good generalization ability. The proposed method delivers highly competitive performance and shows high consistency with the human visual system (HVS) on public SCI-oriented databases.
|
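The 520 abstract above describes a concrete feature-extraction pipeline (multi-scale DOG edge maps, local normalization, L-moments fitting, then a learned regressor). The following Python sketch illustrates one plausible reading of that feature-extraction stage only; the scale set, the 1.6 sigma ratio, the normalization window, and all function names are assumptions for illustration, not the authors' published code, and the AdaBoosted back-propagation regressor itself is left out.

import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edge_map(img, sigma, k=1.6):
    # Difference-of-Gaussian response at one scale (k = 1.6 is an assumed ratio).
    return gaussian_filter(img, sigma) - gaussian_filter(img, k * sigma)

def local_normalize(x, sigma=2.0, eps=1e-6):
    # Divisive normalization against a Gaussian-weighted local mean and std.
    mu = gaussian_filter(x, sigma)
    var = gaussian_filter(x * x, sigma) - mu * mu
    return (x - mu) / (np.sqrt(np.maximum(var, 0.0)) + eps)

def l_moments(x):
    # First four sample L-moments via probability-weighted moments (Hosking).
    x = np.sort(np.asarray(x, dtype=np.float64).ravel())
    n = x.size
    i = np.arange(n, dtype=np.float64)
    b0 = x.mean()
    b1 = np.sum(i * x) / (n * (n - 1))
    b2 = np.sum(i * (i - 1) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum(i * (i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    return np.array([b0,
                     2 * b1 - b0,
                     6 * b2 - 6 * b1 + b0,
                     20 * b3 - 30 * b2 + 12 * b1 - b0])

def edge_features(gray, sigmas=(1.0, 2.0, 4.0)):
    # Concatenate L-moment descriptors of normalized DOG maps over several scales;
    # the resulting vector would be fed to a trained regressor (e.g. boosted BP nets).
    return np.concatenate([l_moments(local_normalize(dog_edge_map(gray, s)))
                           for s in sigmas])

# Usage sketch: feats = edge_features(np.asarray(gray_image, dtype=np.float64))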
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Bian, Zilin
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Liu, Jiacheng
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Jiang, Bin
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Lu, Wen
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Gao, Xinbo
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Song, Houbing
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 30(2021) vom: 26., Seite 6801-6814
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:30
|g year:2021
|g day:26
|g pages:6801-6814
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2021.3098245
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 30
|j 2021
|b 26
|h 6801-6814
|