Network-based H.264/AVC whole frame loss visibility model and frame dropping methods

We examine the visual effect of whole frame loss by different decoders. Whole frame losses are introduced in H.264/AVC compressed videos which are then decoded by two different decoders with different common concealment effects: frame copy and frame interpolation. The videos are seen by human observers who respond to each glitch they spot. We found that about 39% of whole frame losses of B frames are not observed by any of the subjects, and over 58% of the B frame losses are observed by 20% or fewer of the subjects. Using simple predictive features which can be calculated inside a network node with no access to the original video and no pixel level reconstruction of the frame, we developed models which can predict the visibility of whole B frame losses. The models are then used in a router to predict the visual impact of a frame loss and perform intelligent frame dropping to relieve network congestion. Dropping frames based on their visual scores proves superior to random dropping of B frames.
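The dropping policy compared in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frame ids and visibility scores are hypothetical inputs, and the visibility model that would produce the scores inside a network node is not reproduced here.

```python
import random

def drop_frames_by_visibility(frames, visibility, n_drop):
    """Intelligent dropping: discard the n_drop frames whose loss is
    predicted to be least visible to viewers.
    frames: list of droppable (e.g. B) frame ids, in display order.
    visibility: dict mapping frame id -> predicted loss-visibility
    score (lower = loss less likely to be noticed)."""
    least_visible = sorted(frames, key=lambda f: visibility[f])
    to_drop = set(least_visible[:n_drop])
    return [f for f in frames if f not in to_drop]

def drop_frames_randomly(frames, n_drop, seed=0):
    """Baseline: drop n_drop frames chosen uniformly at random."""
    rng = random.Random(seed)
    to_drop = set(rng.sample(frames, n_drop))
    return [f for f in frames if f not in to_drop]
```

Under congestion, a router using the first policy sheds the frames with the lowest predicted visual impact first, which is the behavior the paper reports as superior to the random baseline.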

Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 21(2012), 8, 16 Aug., pages 3353-63
First author: Chang, Yueh-Lun (author)
Other authors: Lin, Ting-Lan, Cosman, Pamela C
Format: Online article
Language: English
Published: 2012
Access to parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM216544793
003 DE-627
005 20231224032011.0
007 cr uuu---uuuuu
008 231224s2012 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2012.2191567  |2 doi 
028 5 2 |a pubmed24n0721.xml 
035 |a (DE-627)NLM216544793 
035 |a (NLM)22453638 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Chang, Yueh-Lun  |e verfasserin  |4 aut 
245 1 0 |a Network-based H.264/AVC whole frame loss visibility model and frame dropping methods 
264 1 |c 2012 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 07.04.2014 
500 |a Date Revised 06.09.2013 
500 |a published: Print-Electronic 
500 |a Citation Status MEDLINE 
520 |a We examine the visual effect of whole frame loss by different decoders. Whole frame losses are introduced in H.264/AVC compressed videos which are then decoded by two different decoders with different common concealment effects: frame copy and frame interpolation. The videos are seen by human observers who respond to each glitch they spot. We found that about 39% of whole frame losses of B frames are not observed by any of the subjects, and over 58% of the B frame losses are observed by 20% or fewer of the subjects. Using simple predictive features which can be calculated inside a network node with no access to the original video and no pixel level reconstruction of the frame, we developed models which can predict the visibility of whole B frame losses. The models are then used in a router to predict the visual impact of a frame loss and perform intelligent frame dropping to relieve network congestion. Dropping frames based on their visual scores proves superior to random dropping of B frames 
650 4 |a Journal Article 
700 1 |a Lin, Ting-Lan  |e verfasserin  |4 aut 
700 1 |a Cosman, Pamela C  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 21(2012), 8 vom: 16. Aug., Seite 3353-63  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:21  |g year:2012  |g number:8  |g day:16  |g month:08  |g pages:3353-63 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2012.2191567  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 21  |j 2012  |e 8  |b 16  |c 08  |h 3353-63