LEADER |
01000naa a22002652 4500 |
001 |
NLM346893275 |
003 |
DE-627 |
005 |
20231226032533.0 |
007 |
cr uuu---uuuuu |
008 |
231226s2023 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2022.3210652
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1156.xml
|
035 |
|
|
|a (DE-627)NLM346893275
|
035 |
|
|
|a (NLM)36173774
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Hu, Zhihao
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a FVC
|b An End-to-End Framework Towards Deep Video Compression in Feature Space
|
264 |
|
1 |
|c 2023
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 10.04.2023
|
500 |
|
|
|a Date Revised 10.04.2023
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Deep video compression is attracting increasing attention from both the deep learning and video processing communities. Recent learning-based approaches follow the hybrid coding paradigm and perform pixel-space operations to reduce redundancy along both the spatial and temporal dimensions, which leads to inaccurate motion estimation or less effective motion compensation. In this work, we propose a feature-space video coding framework (FVC), which performs all major operations (i.e., motion estimation, motion compression, motion compensation and residual compression) in the feature space. Specifically, a new deformable compensation module, which consists of motion estimation, motion compression and motion compensation, is proposed for more effective motion compensation. In our deformable compensation module, we first perform motion estimation in the feature space to produce the motion information (i.e., the offset maps). Then the motion information is compressed by using an auto-encoder style network. After that, we use the deformable convolution operation to generate the predicted feature for motion compensation. Finally, the residual information between the feature from the current frame and the predicted feature from the deformable compensation module is also compressed in the feature space. Motivated by conventional codecs, in which blocks with different sizes are used for motion estimation, we additionally propose two new modules called resolution-adaptive motion coding (RaMC) and resolution-adaptive residual coding (RaRC) to automatically cope with different types of motion and residual patterns at different spatial locations. Comprehensive experimental results demonstrate that our proposed framework achieves state-of-the-art performance on three benchmark datasets including HEVC, UVG and MCL-JCV
|
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Xu, Dong
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Lu, Guo
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Jiang, Wei
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Wang, Wei
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Liu, Shan
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 45(2023), 4 vom: 01. Apr., Seite 4569-4585
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:45
|g year:2023
|g number:4
|g day:01
|g month:04
|g pages:4569-4585
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2022.3210652
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 45
|j 2023
|e 4
|b 01
|c 04
|h 4569-4585
|