Real-Time Action Recognition with Deeply-Transferred Motion Vector CNNs

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - (2018), 08 Jan.
First Author: Zhang, Bowen (Author)
Other Authors: Wang, Limin; Wang, Zhe; Qiao, Yu; Wang, Hanli
Format: Online Article
Language: English
Published: 2018
Access to parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
Description
Summary: Two-stream CNNs have proven very successful for video-based action recognition. However, the classical two-stream CNNs are computationally costly, mainly due to the bottleneck of calculating optical flow. In this paper, we propose a two-stream real-time action recognition approach that uses motion vectors to replace optical flow. Motion vectors are already encoded in the video stream and can be extracted directly without extra computation. However, directly training a CNN on motion vectors degrades accuracy severely because motion vectors are noisy and lack fine details. To relieve this problem, we propose four training strategies that leverage the knowledge learned by an optical flow CNN to enhance the accuracy of the motion vector CNN. Our insight is that motion vectors and optical flow share similar inherent structures, which allows us to transfer knowledge from one domain to the other. To fully utilize the knowledge learned in the optical flow domain, we develop a deeply transferred motion vector CNN. Experimental results on various datasets show the effectiveness of our training strategies. Our approach is significantly faster than optical flow based approaches and achieves a processing speed of 390.7 frames per second, surpassing the real-time requirement. We release our models and code to facilitate further research.
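The transfer idea described in the summary can be illustrated with a small sketch. The following PyTorch-style snippet is a hypothetical illustration, not the authors' released code: it shows one plausible way an optical-flow teacher network could guide a motion-vector student network, combining the usual cross-entropy loss on action labels with a distillation loss on the teacher's softened predictions. The model objects, the temperature, and the weighting factor alpha are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch of teacher-student transfer in the spirit of the paper:
# an optical-flow CNN (teacher) guides a motion-vector CNN (student).
# Layer names, temperature, and alpha are assumptions, not the authors' code.

def transfer_step(of_teacher: nn.Module,
                  mv_student: nn.Module,
                  optical_flow: torch.Tensor,
                  motion_vectors: torch.Tensor,
                  labels: torch.Tensor,
                  temperature: float = 2.0,
                  alpha: float = 0.5) -> torch.Tensor:
    """One training step mixing the hard label loss with a soft loss
    distilled from the optical-flow teacher's predictions."""
    with torch.no_grad():
        teacher_logits = of_teacher(optical_flow)    # teacher sees optical flow
    student_logits = mv_student(motion_vectors)      # student sees motion vectors

    # Hard loss: standard cross-entropy against the action labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    # Soft loss: KL divergence between temperature-softened distributions.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)

    return alpha * hard_loss + (1.0 - alpha) * soft_loss
```

A complementary strategy of the same flavor, again only as an assumed illustration, is to initialize the student directly from the teacher's weights before fine-tuning on motion vectors (e.g., mv_student.load_state_dict(of_teacher.state_dict()) when the two architectures match).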
Description: Date Revised 27.02.2024
Published: Print-Electronic
Citation Status: Publisher
ISSN: 1941-0042
DOI: 10.1109/TIP.2018.2791180