LEADER |
01000caa a22002652 4500 |
001 |
NLM305432907 |
003 |
DE-627 |
005 |
20240229162457.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2020 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2020.2965306
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1308.xml
|
035 |
|
|
|a (DE-627)NLM305432907
|
035 |
|
|
|a (NLM)31944977
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Ding, Lin
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Joint Coding of Local and Global Deep Features in Videos for Visual Search
|
264 |
|
1 |
|c 2020
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 27.02.2024
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status Publisher
|
520 |
|
|
|a In practice, it is more feasible to collect compact visual features, rather than full video streams, from hundreds of thousands of cameras into the cloud for big data analysis and retrieval. The problem then becomes which kinds of features should be extracted, compressed and transmitted so as to meet the requirements of various visual tasks. Recently, many studies have indicated that the activations from the convolutional layers in convolutional neural networks (CNNs) can be treated as local deep features describing particular details inside an image region, which can then be aggregated (e.g., using Fisher Vectors) into a powerful global descriptor. The combination of local and global features can satisfy these various needs effectively. It has also been validated that, if only the local deep features are coded and transmitted to the cloud while the global features are recovered from the decoded local features, the aggregated global features will be lossy and consequently degrade the overall performance. Therefore, this paper proposes a joint coding framework for local and global deep features (DFJC) extracted from videos. In this framework, we introduce a coding scheme for real-valued local and global deep features with intra-frame lossy coding and inter-frame reference coding. A theoretical analysis is performed to understand how the number of inliers varies with the number of local features. Moreover, inter-feature correlations are exploited in our framework: local feature coding can be accelerated by making use of the frame types determined with global features, while the lossy global features aggregated from the decoded local features can be used as a reference for global feature coding. Extensive experimental results under three metrics show that our DFJC framework can significantly reduce the bitrate of local and global deep features from videos while maintaining the retrieval performance.
|
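A minimal sketch of the Fisher-Vector aggregation step the abstract mentions (local conv-layer activations pooled into a global descriptor), assuming an 8-component diagonal GMM and 64-dimensional local features; both sizes and all names here are illustrative choices, not the authors' DFJC coder.

import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(local_feats, gmm):
    # Aggregate N x D local descriptors into a first-order Fisher Vector.
    q = gmm.predict_proba(local_feats)                 # N x K soft assignments
    diff = local_feats[:, None, :] - gmm.means_[None]  # N x K x D residuals
    diff /= np.sqrt(gmm.covariances_)[None]            # whiten with diagonal covariances
    fv = (q[..., None] * diff).sum(axis=0)             # K x D posterior-weighted residuals
    fv /= local_feats.shape[0] * np.sqrt(gmm.weights_)[:, None]
    fv = fv.ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))             # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)           # L2 normalization

# Toy usage with random data standing in for conv-layer activations.
rng = np.random.default_rng(0)
gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
gmm.fit(rng.normal(size=(5000, 64)))                   # visual vocabulary trained offline
frame_locals = rng.normal(size=(500, 64))              # local deep features of one frame
global_desc = fisher_vector(frame_locals, gmm)         # 8 * 64 = 512-dim global feature
print(global_desc.shape)

Running the decoded (lossy) local features through the same aggregation is what yields the degraded global descriptor the abstract refers to, which DFJC instead uses only as a prediction reference for coding the true global feature.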
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Tian, Yonghong
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Fan, Hongfei
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Chen, Changhuai
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Huang, Tiejun
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g (2020) vom: 15. Jan.
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g year:2020
|g day:15
|g month:01
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2020.2965306
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|j 2020
|b 15
|c 01
|