Learning-Based Just-Noticeable-Quantization-Distortion Modeling for Perceptual Video Coding

Conventional predictive video coding approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative, perceptual video coding (PVC) attempts to achieve high coding efficiency by eliminating perceptual redundancy, typically guided by just-noticeable-distortion (JND) models. Previous JND models were built by adding white Gaussian noise or specific signal patterns to the original images, which is not appropriate for finding JND thresholds of distortion that reduces signal energy. In this paper, we present a novel discrete cosine transform-based, energy-reduced JND model, called ERJND, that is better suited to JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that serve as preprocessing for perceptual video coding. Both JNQD models automatically adjust JND levels according to given quantization step sizes. The first, called LR-JNQD, is based on linear regression and determines the JNQD model parameters from extracted handcrafted features. The second, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, this paper is the first to automatically adjust JND levels according to quantization step sizes when preprocessing the input to video encoders. In experiments, the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation compared with unpreprocessed input.
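
The core idea of the abstract lends itself to a short illustration. Below is a minimal, hypothetical Python sketch of a JNQD-style preprocessing step, not the authors' actual ERJND/LR-JNQD formulation: it suppresses DCT coefficients of a pixel block whose magnitude falls below a JND threshold that grows with the quantization step size, so detail the quantizer would mask anyway is removed before encoding. The function name jnqd_preprocess_block and the parameters base_jnd and slope are illustrative assumptions; in the paper, LR-JNQD predicts the model parameters by linear regression on handcrafted features, and CNN-JNQD by a convolutional neural network.

import numpy as np
from scipy.fft import dctn, idctn

def jnqd_preprocess_block(block, q_step, base_jnd=2.0, slope=0.5):
    # Forward 2-D DCT of an 8x8 pixel block (orthonormal, type-II).
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    # Hypothetical JND threshold: coarser quantization (larger q_step)
    # masks larger coefficient changes, so the threshold grows with it.
    threshold = base_jnd + slope * q_step
    dc = coeffs[0, 0]  # preserve the DC (average brightness) term
    coeffs[np.abs(coeffs) < threshold] = 0.0
    coeffs[0, 0] = dc
    # Inverse DCT returns the perceptually filtered pixel block.
    return idctn(coeffs, norm="ortho")

# Usage: filter each luma block of a frame before handing it to the encoder.
block = np.random.randint(0, 256, (8, 8)).astype(np.float64)
filtered = jnqd_preprocess_block(block, q_step=20.0)

Because the filtering happens entirely before encoding, such a preprocessing stage can be paired with an unmodified HEVC encoder, which is what lets the paper report bitrate savings without changing the coding loop.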

Detailed description

Bibliographic details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 27(2018), 7, 31 July, pages 3178-3193
First author: Ki, Sehwan (author)
Other authors: Bae, Sung-Ho, Kim, Munchurl, Ko, Hyunsuk
Format: Online article
Language: English
Published: 2018
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM282900829
003 DE-627
005 20231225035024.0
007 cr uuu---uuuuu
008 231225s2018 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2018.2818439  |2 doi 
028 5 2 |a pubmed24n0943.xml 
035 |a (DE-627)NLM282900829 
035 |a (NLM)29641399 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Ki, Sehwan  |e verfasserin  |4 aut 
245 1 0 |a Learning-Based Just-Noticeable-Quantization-Distortion Modeling for Perceptual Video Coding 
264 1 |c 2018 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 30.07.2018 
500 |a Date Revised 30.07.2018 
500 |a published: Print 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Conventional predictive video coding approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative, perceptual video coding (PVC) attempts to achieve high coding efficiency by eliminating perceptual redundancy, typically guided by just-noticeable-distortion (JND) models. Previous JND models were built by adding white Gaussian noise or specific signal patterns to the original images, which is not appropriate for finding JND thresholds of distortion that reduces signal energy. In this paper, we present a novel discrete cosine transform-based, energy-reduced JND model, called ERJND, that is better suited to JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that serve as preprocessing for perceptual video coding. Both JNQD models automatically adjust JND levels according to given quantization step sizes. The first, called LR-JNQD, is based on linear regression and determines the JNQD model parameters from extracted handcrafted features. The second, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, this paper is the first to automatically adjust JND levels according to quantization step sizes when preprocessing the input to video encoders. In experiments, the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation compared with unpreprocessed input
650 4 |a Journal Article 
700 1 |a Bae, Sung-Ho  |e verfasserin  |4 aut 
700 1 |a Kim, Munchurl  |e verfasserin  |4 aut 
700 1 |a Ko, Hyunsuk  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 27(2018), 7 vom: 31. Juli, Seite 3178-3193  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:27  |g year:2018  |g number:7  |g day:31  |g month:07  |g pages:3178-3193 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2018.2818439  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 27  |j 2018  |e 7  |b 31  |c 07  |h 3178-3193