|
|
|
|
LEADER |
01000naa a22002652 4500 |
001 |
NLM321653718 |
003 |
DE-627 |
005 |
20231225180334.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2021 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2021.3058615
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1072.xml
|
035 |
|
|
|a (DE-627)NLM321653718
|
035 |
|
|
|a (NLM)33606630
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Chen, Tong
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a End-to-End Learnt Image Compression via Non-Local Attention Optimization and Improved Context Modeling
|
264 |
|
1 |
|c 2021
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 26.02.2021
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a This article proposes an end-to-end learnt lossy image compression approach, which is built on top of the deep neural network (DNN)-based variational auto-encoder (VAE) structure with Non-Local Attention optimization and Improved Context modeling (NLAIC). Our NLAIC 1) embeds non-local network operations as non-linear transforms in both the main and hyper coders for deriving the respective latent features and hyperpriors by exploiting both local and global correlations, 2) applies an attention mechanism to generate implicit masks that are used to weigh the features for adaptive bit allocation, and 3) implements improved conditional entropy modeling of latent features using joint 3D convolutional neural network (CNN)-based autoregressive contexts and hyperpriors. Towards practical application, additional enhancements are also introduced to speed up computation (e.g., parallel 3D CNN-based context prediction), decrease memory consumption (e.g., sparse non-local processing), and reduce implementation complexity (e.g., a unified model for variable rates without re-training). The proposed model outperforms existing learnt and conventional (e.g., BPG, JPEG2000, JPEG) image compression methods on both the Kodak and Tecnick datasets, achieving state-of-the-art compression efficiency under both PSNR and MS-SSIM quality measurements. We have made all materials publicly accessible at https://njuvision.github.io/NIC for reproducible research.
|
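[Note: The abstract above describes embedding non-local attention operations inside the VAE-style main and hyper coders. The following is a minimal, purely illustrative Python (PyTorch) sketch of a generic non-local attention block in the style that the abstract names; it is not the authors' NLAIC code (their materials are linked at https://njuvision.github.io/NIC), and all names (NonLocalBlock, in_ch, inter) are hypothetical.]

# Illustrative sketch only: a generic non-local attention block, NOT the
# authors' NLAIC implementation. Channel counts and module names are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock(nn.Module):
    def __init__(self, in_ch: int):
        super().__init__()
        inter = max(in_ch // 2, 1)                 # reduced embedding channels
        self.theta = nn.Conv2d(in_ch, inter, 1)    # query embedding
        self.phi = nn.Conv2d(in_ch, inter, 1)      # key embedding
        self.g = nn.Conv2d(in_ch, inter, 1)        # value embedding
        self.out = nn.Conv2d(inter, in_ch, 1)      # restore channel count

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, hw, inter)
        k = self.phi(x).flatten(2)                     # (b, inter, hw)
        v = self.g(x).flatten(2).transpose(1, 2)       # (b, hw, inter)
        attn = F.softmax(torch.bmm(q, k), dim=-1)      # global pairwise weights
        y = torch.bmm(attn, v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection

# Usage: globally re-weighting a toy latent feature tensor.
if __name__ == "__main__":
    feats = torch.randn(4, 32, 16, 16)
    print(NonLocalBlock(32)(feats).shape)              # torch.Size([4, 32, 16, 16])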
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Liu, Haojie
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Ma, Zhan
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Shen, Qiu
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Cao, Xun
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Wang, Yao
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 30(2021) vom: 01., Seite 3179-3191
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:30
|g year:2021
|g day:01
|g pages:3179-3191
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2021.3058615
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 30
|j 2021
|b 01
|h 3179-3191
|