LEADER 01000naa a22002652 4500
001 NLM21552392X
003 DE-627
005 20231224025709.0
007 cr uuu---uuuuu
008 231224s2012 xx |||||o 00| ||eng c
024 7  |a 10.1109/TIP.2012.2188038 |2 doi
028 52 |a pubmed24n0718.xml
035    |a (DE-627)NLM21552392X
035    |a (NLM)22345543
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Jiang, Yu-Gang |e verfasserin |4 aut
245 10 |a Fast semantic diffusion for large-scale context-based image and video annotation
264  1 |c 2012
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Completed 04.09.2012
500    |a Date Revised 16.05.2012
500    |a published: Print-Electronic
500    |a Citation Status PubMed-not-MEDLINE
520    |a Exploring context information for visual recognition has recently received significant research attention. This paper proposes a novel and highly efficient approach, named semantic diffusion, to utilize semantic context for large-scale image and video annotation. Starting from the initial annotation of a large number of semantic concepts (categories), obtained by either machine learning or manual tagging, the proposed approach refines the results using a graph diffusion technique, which recovers the consistency and smoothness of the annotations over a semantic graph. Unlike existing graph-based learning methods that model relations among data samples, the semantic graph captures context by treating the concepts as nodes and the concept affinities as the weights of edges. In particular, our approach is capable of simultaneously improving annotation accuracy and adapting the concept affinities to new test data. The adaptation provides a means to handle the domain change between training and test data, which often occurs in practice. Extensive experiments are conducted to improve concept annotation results on Flickr images and TV program videos. Results show consistent and significant performance gains (10+% on both the image and video data sets). Source code for the proposed algorithms is available online.
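The 520 abstract describes a graph-diffusion refinement in which concepts are nodes, concept affinities are edge weights, and initial annotation scores are smoothed over that semantic graph. The Python sketch below illustrates the general idea with a generic label-propagation-style update; the update rule, the alpha parameter, the function name semantic_diffusion, and the toy affinity matrix are all assumptions for illustration, not the paper's exact formulation.

# Illustrative sketch of diffusion over a concept graph.
# Assumed (not from the paper): the normalized update rule,
# alpha, and the toy affinity matrix W below.
import numpy as np

def semantic_diffusion(scores, W, alpha=0.2, iters=30):
    """Refine one image's concept scores over a concept affinity graph.

    scores : (n_concepts,) initial annotation scores
    W      : (n_concepts, n_concepts) nonnegative concept affinities
    alpha  : assumed diffusion strength; 1 - alpha anchors the result
             to the initial scores
    """
    d = W.sum(axis=1)
    d[d == 0] = 1.0                        # guard isolated concepts
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt        # symmetric normalization

    s = scores.astype(float).copy()
    for _ in range(iters):
        # Pull each concept toward its graph neighbors while staying
        # anchored to the initial annotation (label-propagation style).
        s = alpha * (S @ s) + (1.0 - alpha) * scores
    return s

# Hypothetical toy graph: "beach" and "sea" are strongly related;
# "office" is unrelated to both.
W = np.array([[0.0, 0.9, 0.0],
              [0.9, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
initial = np.array([0.9, 0.2, 0.8])        # raw detector scores
print(semantic_diffusion(initial, W))      # "sea" is pulled up by "beach"

The symmetric normalization keeps repeated updates from inflating the scores. Per the abstract, the paper additionally adapts the affinities to new test data to handle domain change, which this sketch omits.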
650  4 |a Journal Article
650  4 |a Research Support, Non-U.S. Gov't
650  4 |a Research Support, U.S. Gov't, Non-P.H.S.
700 1  |a Dai, Qi |e verfasserin |4 aut
700 1  |a Wang, Jun |e verfasserin |4 aut
700 1  |a Ngo, Chong-Wah |e verfasserin |4 aut
700 1  |a Xue, Xiangyang |e verfasserin |4 aut
700 1  |a Chang, Shih-Fu |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society |d 1992 |g 21(2012), 6 vom: 15. Juni, Seite 3080-91 |w (DE-627)NLM09821456X |x 1941-0042 |7 nnns
773 18 |g volume:21 |g year:2012 |g number:6 |g day:15 |g month:06 |g pages:3080-91
856 40 |u http://dx.doi.org/10.1109/TIP.2012.2188038 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_NLM
912    |a GBV_ILN_350
951    |a AR
952    |d 21 |j 2012 |e 6 |b 15 |c 06 |h 3080-91