LEADER |
01000naa a22002652 4500 |
001 |
NLM355202700 |
003 |
DE-627 |
005 |
20231226063840.0 |
007 |
cr uuu---uuuuu |
008 |
231226s2022 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2022.3226417
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1183.xml
|
035 |
|
|
|a (DE-627)NLM355202700
|
035 |
|
|
|a (NLM)37015524
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Yang, Li
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Blind VQA on 360° Video via Progressively Learning from Pixels, Frames and Video
|
264 |
|
1 |
|c 2022
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 04.04.2023
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status Publisher
|
520 |
|
|
|a Blind visual quality assessment (BVQA) on 360° video plays a key role in optimizing immersive multimedia systems. When assessing the quality of 360° video, humans tend to perceive quality degradation progressively: from the viewport-based spatial distortion of each spherical frame, to motion artifacts across adjacent frames, and finally to the video-level quality score, i.e., a progressive quality assessment paradigm. However, existing BVQA approaches for 360° video neglect this paradigm. In this paper, we take into account the progressive paradigm of human perception of spherical video quality, and thus propose a novel BVQA approach (namely ProVQA) for 360° video via progressive learning from pixels, frames and video. Corresponding to the progressive learning from pixels, frames and video, three sub-nets are designed in our ProVQA approach: the spherical perception aware quality prediction (SPAQ), motion perception aware quality prediction (MPAQ) and multi-frame temporal non-local (MFTN) sub-nets. The SPAQ sub-net first models spatial quality degradation based on the spherical perception mechanism of humans. Then, by exploiting motion cues across adjacent frames, the MPAQ sub-net incorporates motion contextual information for quality assessment on 360° video. Finally, the MFTN sub-net aggregates multi-frame quality degradation to yield the final quality score, by exploring long-term quality correlations across multiple frames. Experiments validate that our approach significantly advances state-of-the-art BVQA performance on 360° video over two datasets; the code has been made publicly available at https://github.com/yanglixiaoshen/ProVQA
|
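|x Editor's illustration: the abstract above describes a pixels-to-frames-to-video pipeline of three sub-nets (SPAQ, MPAQ, MFTN). The following is a minimal PyTorch sketch of that progressive structure only; every layer, shape and module body is an illustrative assumption, not the authors' implementation, which is at the linked GitHub repository.

import torch
import torch.nn as nn

class SPAQ(nn.Module):
    """Pixel level (sketch): per-frame spatial quality features."""
    def __init__(self, channels=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())

    def forward(self, frames):            # frames: (B, T, 3, H, W)
        b, t, c, h, w = frames.shape
        x = self.conv(frames.view(b * t, c, h, w))
        return x.view(b, t, -1, h, w)     # (B, T, C, H, W)

class MPAQ(nn.Module):
    """Frame level (sketch): fuse motion cues from adjacent frames."""
    def __init__(self, channels=32):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, feats):             # feats: (B, T, C, H, W)
        prev = torch.roll(feats, shifts=1, dims=1)
        prev[:, 0] = feats[:, 0]          # first frame has no predecessor
        b, t, c, h, w = feats.shape
        x = torch.cat([feats, prev], dim=2).view(b * t, 2 * c, h, w)
        return torch.relu(self.fuse(x)).view(b, t, c, h, w)

class MFTN(nn.Module):
    """Video level (sketch): aggregate multi-frame quality to one score."""
    def __init__(self, channels=32):
        super().__init__()
        self.head = nn.Linear(channels, 1)

    def forward(self, feats):             # feats: (B, T, C, H, W)
        pooled = feats.mean(dim=(3, 4))   # global spatial pooling -> (B, T, C)
        per_frame = self.head(pooled).squeeze(-1)   # (B, T) frame qualities
        return per_frame.mean(dim=1)      # (B,) video-level quality score

class ProVQASketch(nn.Module):
    """Progressive pipeline: pixels -> frames -> video."""
    def __init__(self):
        super().__init__()
        self.spaq, self.mpaq, self.mftn = SPAQ(), MPAQ(), MFTN()

    def forward(self, frames):
        return self.mftn(self.mpaq(self.spaq(frames)))

score = ProVQASketch()(torch.rand(1, 8, 3, 64, 128))  # one 8-frame clip
|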
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Xu, Mai
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Li, Shengxi
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Guo, Yichen
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Wang, Zulin
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g PP(2022) vom: 07. Dez.
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:PP
|g year:2022
|g day:07
|g month:12
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2022.3226417
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d PP
|j 2022
|b 07
|c 12
|