Real-Time Action Recognition with Deeply-Transferred Motion Vector CNNs

Two-stream CNNs have proven very successful for video-based action recognition. However, classical two-stream CNNs are computationally costly, mainly due to the bottleneck of calculating optical flow. In this paper, we propose a two-stream-based real-time action recognition approach that replaces optical flow with motion vectors. Motion vectors are already encoded in the video stream and can be extracted directly without extra computation. However, training a CNN directly on motion vectors degrades accuracy severely because of the noise and the lack of fine detail in motion vectors. To relieve this problem, we propose four training strategies that leverage the knowledge learned by an optical flow CNN to enhance the accuracy of the motion vector CNN. Our insight is that motion vectors and optical flow share inherently similar structures, which allows us to transfer knowledge from one domain to the other. To fully utilize the knowledge learned in the optical flow domain, we develop a deeply transferred motion vector CNN. Experimental results on various datasets show the effectiveness of our training strategies. Our approach is significantly faster than optical-flow-based approaches, achieving a processing speed of 390.7 frames per second and surpassing the real-time requirement. We release our model and code to facilitate further research.
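The abstract describes transferring knowledge from an optical flow CNN (teacher) to a motion vector CNN (student). The paper's four concrete training strategies are not detailed in this record; the sketch below shows only the generic teacher-student objective often used for such transfer: cross-entropy between temperature-softened teacher and student output distributions. All function names here are illustrative, not from the paper's released code.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def transfer_loss(teacher_logits, student_logits, temperature=4.0):
    """Cross-entropy between softened teacher and student distributions.

    A generic distillation-style objective; the paper's actual four
    strategies may combine this with other terms (e.g. weight transfer).
    """
    p = softmax(teacher_logits, temperature)  # optical flow CNN (teacher)
    q = softmax(student_logits, temperature)  # motion vector CNN (student)
    return -sum(pi * math.log(qi + 1e-12) for pi, qi in zip(p, q))

# When the student matches the teacher exactly, the loss reduces to the
# teacher distribution's entropy; mismatched logits give a larger loss.
loss_same = transfer_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
loss_diff = transfer_loss([2.0, 0.5, -1.0], [-1.0, 0.5, 2.0])
```

Minimizing this loss over the student's parameters pushes the motion vector CNN's predictions toward those of the optical flow CNN, which is one plausible reading of "leverage the knowledge learned from optical flow CNN" in the abstract.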

Detailed Description

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - (2018), 8 Jan.
Main Author: Zhang, Bowen (Author)
Other Authors: Wang, Limin, Wang, Zhe, Qiao, Yu, Wang, Hanli
Format: Online Article
Language: English
Published: 2018
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM286364913
003 DE-627
005 20240229161830.0
007 cr uuu---uuuuu
008 231225s2018 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2018.2791180  |2 doi 
028 5 2 |a pubmed24n1308.xml 
035 |a (DE-627)NLM286364913 
035 |a (NLM)29993948 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Zhang, Bowen  |e verfasserin  |4 aut 
245 1 0 |a Real-Time Action Recognition with Deeply-Transferred Motion Vector CNNs 
264 1 |c 2018 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 27.02.2024 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a Two-stream CNNs have proven very successful for video-based action recognition. However, classical two-stream CNNs are computationally costly, mainly due to the bottleneck of calculating optical flow. In this paper, we propose a two-stream-based real-time action recognition approach that replaces optical flow with motion vectors. Motion vectors are already encoded in the video stream and can be extracted directly without extra computation. However, training a CNN directly on motion vectors degrades accuracy severely because of the noise and the lack of fine detail in motion vectors. To relieve this problem, we propose four training strategies that leverage the knowledge learned by an optical flow CNN to enhance the accuracy of the motion vector CNN. Our insight is that motion vectors and optical flow share inherently similar structures, which allows us to transfer knowledge from one domain to the other. To fully utilize the knowledge learned in the optical flow domain, we develop a deeply transferred motion vector CNN. Experimental results on various datasets show the effectiveness of our training strategies. Our approach is significantly faster than optical-flow-based approaches, achieving a processing speed of 390.7 frames per second and surpassing the real-time requirement. We release our model and code to facilitate further research 
650 4 |a Journal Article 
700 1 |a Wang, Limin  |e verfasserin  |4 aut 
700 1 |a Wang, Zhe  |e verfasserin  |4 aut 
700 1 |a Qiao, Yu  |e verfasserin  |4 aut 
700 1 |a Wang, Hanli  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g (2018) vom: 08. Jan.  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g year:2018  |g day:08  |g month:01 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2018.2791180  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |j 2018  |b 08  |c 01