LEADER 01000naa a22002652 4500
001 NLM272548472
003 DE-627
005 20231224235112.0
007 cr uuu---uuuuu
008 231224s2017 xx |||||o 00| ||eng c
024 7  |a 10.1109/TIP.2017.2708503 |2 doi
028 52 |a pubmed24n0908.xml
035    |a (DE-627)NLM272548472
035    |a (NLM)28574353
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Kede Ma |e verfasserin |4 aut
245 10 |a dipIQ |b Blind Image Quality Assessment by Learning-to-Rank Discriminable Image Pairs
264  1 |c 2017
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Completed 11.12.2018
500    |a Date Revised 11.12.2018
500    |a published: Print-Electronic
500    |a Citation Status PubMed-not-MEDLINE
520    |a Objective assessment of image quality is fundamentally important in many image processing tasks. In this paper, we focus on learning blind image quality assessment (BIQA) models, which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (whose dimension is the number of image pixels) and the extremely limited reliable ground-truth data available for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here, we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIPs) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved, as confirmed by the group MAximum Differentiation (gMAD) competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DILs). The resulting DIL inferred quality index achieves an additional performance gain.
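The pairwise learning-to-rank step summarized in the 520 abstract above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the linear model, the 8-dimensional features, and the names quality_score and ranknet_pair_loss are all hypothetical, and the abstract's "perceptual uncertainty level" is folded in as a soft target probability p.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=8)  # toy linear quality model on 8-d features

def quality_score(x, w):
    # Scalar quality estimate for a feature vector x (hypothetical model).
    return float(np.dot(w, x))

def ranknet_pair_loss(s_better, s_worse, p=1.0):
    # RankNet cross-entropy for one DIP; p is the target probability that
    # the first image is the better one (soft targets encode uncertainty).
    diff = s_better - s_worse
    prob = 1.0 / (1.0 + np.exp(-diff))  # model's probability of a correct ranking
    return -(p * np.log(prob) + (1.0 - p) * np.log(1.0 - prob))

# One gradient step on a synthetic DIP; x_b stands in for the higher-quality image.
x_b, x_w = rng.normal(size=8), rng.normal(size=8)
s_b, s_w = quality_score(x_b, W), quality_score(x_w, W)
print("loss before:", ranknet_pair_loss(s_b, s_w))
grad = 1.0 / (1.0 + np.exp(-(s_b - s_w))) - 1.0  # dL/d(diff) for hard target p = 1
W -= 0.1 * grad * (x_b - x_w)                    # chain rule: diff = W.(x_b - x_w)
print("loss after: ", ranknet_pair_loss(quality_score(x_b, W), quality_score(x_w, W)))

With a hard label (p = 1) this reduces to the standard RankNet cross-entropy; the uncertainty-weighted DIPs the abstract describes would correspond to soft targets p < 1.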
650  4 |a Journal Article
700 1  |a Wentao Liu |e verfasserin |4 aut
700 1  |a Tongliang Liu |e verfasserin |4 aut
700 1  |a Zhou Wang |e verfasserin |4 aut
700 1  |a Dacheng Tao |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society |d 1992 |g 26(2017), 8 vom: 02. Aug., Seite 3951-3964 |w (DE-627)NLM09821456X |x 1941-0042 |7 nnns
773 18 |g volume:26 |g year:2017 |g number:8 |g day:02 |g month:08 |g pages:3951-3964
856 40 |u http://dx.doi.org/10.1109/TIP.2017.2708503 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_NLM
912    |a GBV_ILN_350
951    |a AR
952    |d 26 |j 2017 |e 8 |b 02 |c 08 |h 3951-3964