LEADER |
01000naa a22002652 4500 |
001 |
NLM239698908 |
003 |
DE-627 |
005 |
20231224120524.0 |
007 |
cr uuu---uuuuu |
008 |
231224s2014 xx |||||o 00| ||eng c |
028 |
5 |
2 |
|a pubmed24n0799.xml
|
035 |
|
|
|a (DE-627)NLM239698908
|
035 |
|
|
|a (NLM)24983106
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Wan, Jun
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a CSMMI
|b class-specific maximization of mutual information for action and gesture recognition
|
264 |
|
1 |
|c 2014
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 28.10.2015
|
500 |
|
|
|a Date Revised 27.10.2019
|
500 |
|
|
|a published: Print
|
500 |
|
|
|a Citation Status MEDLINE
|
520 |
|
|
|a In this paper, we propose a novel approach called class-specific maximization of mutual information (CSMMI) using a submodular method, which aims at learning a compact and discriminative dictionary for each class. Unlike traditional dictionary-based algorithms, which typically learn a shared dictionary for all of the classes, we unify the intraclass and interclass mutual information (MI) into a single objective function to optimize a class-specific dictionary. The objective function has two aims: 1) maximizing the MI between dictionary items within a specific class (intrinsic structure) and 2) minimizing the MI between the dictionary items in a given class and those of the other classes (extrinsic structure). We significantly reduce the computational complexity of CSMMI by introducing a novel submodular method, which is one of the important contributions of this paper. This paper also contributes a state-of-the-art end-to-end system for action and gesture recognition incorporating CSMMI, consisting of feature extraction, learning an initial dictionary for each class by sparse coding, CSMMI via submodularity, and classification based on reconstruction errors. We performed extensive experiments on synthetic data and eight benchmark data sets. Our experimental results show that CSMMI outperforms shared dictionary methods and that our end-to-end system is competitive with other state-of-the-art approaches
|
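As an illustrative aside: a minimal Python sketch of the greedy, submodular-style per-class dictionary selection outlined in the abstract above, with a Gaussian-kernel similarity standing in for the mutual information terms. This is a toy approximation under stated assumptions, not the authors' published algorithm, and every identifier in it is hypothetical.

import numpy as np

def mi_proxy(x, Y, sigma=1.0):
    # Crude stand-in for the mutual information between atom x and atom set Y.
    if len(Y) == 0:
        return 0.0
    d = np.linalg.norm(Y - x, axis=1)
    return float(np.mean(np.exp(-d ** 2 / (2 * sigma ** 2))))

def select_class_dictionary(cand, other, k, lam=1.0):
    # Greedy selection of k atoms from `cand` for one class:
    # reward similarity to atoms already chosen for this class (intra-class term)
    # and penalize similarity to the other classes' atoms (inter-class term).
    chosen, remaining = [], list(range(len(cand)))
    for _ in range(k):
        def gain(i):
            intra = mi_proxy(cand[i], cand[chosen]) if chosen else 0.0
            inter = mi_proxy(cand[i], other)
            return intra - lam * inter
        best = max(remaining, key=gain)
        chosen.append(best)
        remaining.remove(best)
    return cand[chosen]

# Tiny usage example on random data (the candidate atoms could come from per-class sparse coding).
rng = np.random.default_rng(0)
D_class = select_class_dictionary(rng.normal(size=(50, 8)), rng.normal(size=(80, 8)), k=5)
print(D_class.shape)  # (5, 8)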
650 |
|
4 |
|a Journal Article
|
650 |
|
4 |
|a Research Support, Non-U.S. Gov't
|
700 |
1 |
|
|a Athitsos, Vassilis
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Jangyodsuk, Pat
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Escalante, Hugo Jair
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Ruan, Qiuqi
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Guyon, Isabelle
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 23(2014), 7 vom: 21. Juli, Seite 3152-65
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:23
|g year:2014
|g number:7
|g day:21
|g month:07
|g pages:3152-65
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 23
|j 2014
|e 7
|b 21
|c 07
|h 3152-65
|