Human Action Recognition in Unconstrained Videos by Explicit Motion Modeling

Human action recognition in unconstrained videos is a challenging problem with many applications. Most state-of-the-art approaches adopted the well-known bag-of-features representations, generated based on isolated local patches or patch trajectories, where motion patterns, such as object-object and object-background relationships, are mostly discarded. In this paper, we propose a simple representation aiming at modeling these motion relationships. We adopt global and local reference points to explicitly characterize motion information, so that the final representation is more robust to camera movements, which widely exist in unconstrained videos. Our approach operates on top of visual codewords generated on dense local patch trajectories, and therefore does not require foreground-background separation, which is normally a critical and difficult step in modeling object relationships. Through an extensive set of experimental evaluations, we show that the proposed representation produces very competitive performance on several challenging benchmark data sets. Further combining it with the standard bag-of-features or Fisher vector representations can lead to substantial improvements.
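The core idea of the abstract, describing trajectory motion relative to a shared reference point so that camera movement cancels out, can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a simplified global reference (the per-frame centroid of all tracked points) and a made-up helper name `relative_motion_descriptors`:

```python
import numpy as np

def relative_motion_descriptors(trajectories):
    """Hypothetical sketch: describe each trajectory's motion relative
    to a global reference point (the per-frame centroid of all tracked
    points), so that uniform camera translation cancels out.

    trajectories: array-like of shape (n_traj, n_frames, 2) holding the
    (x, y) coordinates of dense patch trajectories.
    """
    trajectories = np.asarray(trajectories, dtype=float)
    # Global reference point per frame: centroid of all tracked points.
    reference = trajectories.mean(axis=0, keepdims=True)  # (1, n_frames, 2)
    # Positions relative to the reference; a camera translation shifts
    # every point and the centroid by the same amount, so it cancels.
    relative = trajectories - reference
    # Frame-to-frame displacements of the relative positions form the
    # motion descriptor, flattened to one vector per trajectory.
    displacements = np.diff(relative, axis=1)  # (n_traj, n_frames-1, 2)
    return displacements.reshape(len(trajectories), -1)
```

Under this simplification, a video with a panning camera and the same video without the pan yield identical descriptors, which is the camera-motion robustness the abstract claims for reference-point encoding.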

Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 24(2015), 11, 15 Nov., pages 3781-95
Main author: Jiang, Yu-Gang (author)
Other authors: Dai, Qi, Liu, Wei, Xue, Xiangyang, Ngo, Chong-Wah
Format: Online article
Language: English
Published: 2015
Access to parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article; Research Support, Non-U.S. Gov't
LEADER 01000naa a22002652 4500
001 NLM250998262
003 DE-627
005 20231224161030.0
007 cr uuu---uuuuu
008 231224s2015 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2015.2456412  |2 doi 
028 5 2 |a pubmed24n0836.xml 
035 |a (DE-627)NLM250998262 
035 |a (NLM)26186774 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Jiang, Yu-Gang  |e verfasserin  |4 aut 
245 1 0 |a Human Action Recognition in Unconstrained Videos by Explicit Motion Modeling 
264 1 |c 2015 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 29.04.2016 
500 |a Date Revised 10.09.2015 
500 |a published: Print-Electronic 
500 |a Citation Status MEDLINE 
520 |a Human action recognition in unconstrained videos is a challenging problem with many applications. Most state-of-the-art approaches adopted the well-known bag-of-features representations, generated based on isolated local patches or patch trajectories, where motion patterns, such as object-object and object-background relationships, are mostly discarded. In this paper, we propose a simple representation aiming at modeling these motion relationships. We adopt global and local reference points to explicitly characterize motion information, so that the final representation is more robust to camera movements, which widely exist in unconstrained videos. Our approach operates on top of visual codewords generated on dense local patch trajectories, and therefore does not require foreground-background separation, which is normally a critical and difficult step in modeling object relationships. Through an extensive set of experimental evaluations, we show that the proposed representation produces very competitive performance on several challenging benchmark data sets. Further combining it with the standard bag-of-features or Fisher vector representations can lead to substantial improvements. 
650 4 |a Journal Article 
650 4 |a Research Support, Non-U.S. Gov't 
700 1 |a Dai, Qi  |e verfasserin  |4 aut 
700 1 |a Liu, Wei  |e verfasserin  |4 aut 
700 1 |a Xue, Xiangyang  |e verfasserin  |4 aut 
700 1 |a Ngo, Chong-Wah  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 24(2015), 11 vom: 15. Nov., Seite 3781-95  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:24  |g year:2015  |g number:11  |g day:15  |g month:11  |g pages:3781-95 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2015.2456412  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 24  |j 2015  |e 11  |b 15  |c 11  |h 3781-95