KD-INR: Time-Varying Volumetric Data Compression via Knowledge Distillation-Based Implicit Neural Representation

Traditional deep learning algorithms assume that all data is available during training, which presents challenges when handling large-scale time-varying data. To address this issue, we propose a data reduction pipeline called knowledge distillation-based implicit neural representation (KD-INR) for compressing large-scale time-varying data. The approach consists of two stages: spatial compression and model aggregation. In the first stage, each time step is compressed using an implicit neural representation with bottleneck layers and features-of-interest-preservation-based sampling. In the second stage, we utilize an offline knowledge distillation algorithm to extract knowledge from the trained models and aggregate it into a single model. We evaluated our approach on a variety of time-varying volumetric data sets. Both quantitative and qualitative results, such as PSNR, LPIPS, and rendered images, demonstrate that KD-INR surpasses state-of-the-art approaches, including learning-based (i.e., CoordNet, NeurComp, and SIREN) and lossy compression (i.e., SZ3, ZFP, and TTHRESH) methods, at compression ratios ranging from hundreds to ten thousand.
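To make the two-stage pipeline concrete, below is a minimal PyTorch sketch, not the authors' code: a coordinate MLP with a narrow bottleneck layer stands in for the per-time-step INR of stage one, and an offline distillation loop trains a single time-conditioned "student" INR to reproduce the frozen per-time-step "teacher" INRs in stage two. All names (BottleneckINR, distill_step), layer sizes, and the uniform random sampling (which replaces the paper's features-of-interest-preservation-based sampling) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BottleneckINR(nn.Module):
    """Implicit neural representation f(coords) -> scalar field value (sketch)."""
    def __init__(self, in_dim=3, hidden=128, bottleneck=16, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for i in range(depth):
            # squeeze the middle hidden layer down to a narrow bottleneck
            h = bottleneck if i == depth // 2 else hidden
            layers += [nn.Linear(d, h), nn.ReLU()]
            d = h
        layers.append(nn.Linear(d, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)

def distill_step(student, teacher, t, n_samples=4096, device="cpu"):
    """One offline KD step: fit the time-conditioned student to a frozen
    per-time-step teacher on randomly sampled spatial coordinates."""
    coords = torch.rand(n_samples, 3, device=device) * 2 - 1   # sample in [-1, 1]^3
    with torch.no_grad():
        target = teacher(coords)                               # teacher "knowledge"
    t_col = torch.full((n_samples, 1), t, device=device)       # broadcast time value
    pred = student(torch.cat([coords, t_col], dim=1))          # student sees (x, y, z, t)
    return nn.functional.mse_loss(pred, target)

# Usage sketch: teachers[t] is a trained per-time-step INR; the student is a
# single INR over (x, y, z, t) that aggregates all of them.
teachers = [BottleneckINR() for _ in range(8)]
student = BottleneckINR(in_dim=4)
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
for epoch in range(100):
    for t, teacher in enumerate(teachers):
        opt.zero_grad()
        loss = distill_step(student, teacher, t / max(len(teachers) - 1, 1))
        loss.backward()
        opt.step()
```

In this reading, the compressed artifact after stage two is just the student's weights; decompression amounts to evaluating the student at the desired spatio-temporal coordinates.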

Detailed Description

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - 30(2024), 10, 21 Sept., pages 6826-6838
Main Author: Han, Jun (Author)
Other Authors: Zheng, Hao, Bi, Chongke
Format: Online Article
Language: English
Published: 2024
Access to the parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM366182609
003 DE-627
005 20240906232537.0
007 cr uuu---uuuuu
008 231227s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2023.3345373  |2 doi 
028 5 2 |a pubmed24n1525.xml 
035 |a (DE-627)NLM366182609 
035 |a (NLM)38127599 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Han, Jun  |e verfasserin  |4 aut 
245 1 0 |a KD-INR  |b Time-Varying Volumetric Data Compression via Knowledge Distillation-Based Implicit Neural Representation 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 05.09.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Traditional deep learning algorithms assume that all data is available during training, which presents challenges when handling large-scale time-varying data. To address this issue, we propose a data reduction pipeline called knowledge distillation-based implicit neural representation (KD-INR) for compressing large-scale time-varying data. The approach consists of two stages: spatial compression and model aggregation. In the first stage, each time step is compressed using an implicit neural representation with bottleneck layers and features of interest preservation-based sampling. In the second stage, we utilize an offline knowledge distillation algorithm to extract knowledge from the trained models and aggregate it into a single model. We evaluated our approach on a variety of time-varying volumetric data sets. Both quantitative and qualitative results, such as PSNR, LPIPS, and rendered images, demonstrate that KD-INR surpasses the state-of-the-art approaches, including learning-based (i.e., CoordNet, NeurComp, and SIREN) and lossy compression (i.e., SZ3, ZFP, and TTHRESH) methods, at various compression ratios ranging from hundreds to ten thousand 
650 4 |a Journal Article 
700 1 |a Zheng, Hao  |e verfasserin  |4 aut 
700 1 |a Bi, Chongke  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g 30(2024), 10 vom: 21. Sept., Seite 6826-6838  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:30  |g year:2024  |g number:10  |g day:21  |g month:09  |g pages:6826-6838 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2023.3345373  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 30  |j 2024  |e 10  |b 21  |c 09  |h 6826-6838