Rethinking Motion Representation: Residual Frames With 3D ConvNets

Recently, 3D convolutional networks have yielded good performance in action recognition. However, an optical flow stream is still needed for motion representation to ensure better performance, and its computation is very costly. In this paper, we propose a cheap but effective way to extract motion features from video...

Detailed description

Bibliographic details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 30 (2021), from: 04, pages 9231-9244
First author: Tao, Li (author)
Other authors: Wang, Xueting; Yamasaki, Toshihiko
Format: Online article
Language: English
Published: 2021
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM33273448X
003 DE-627
005 20231225220202.0
007 cr uuu---uuuuu
008 231225s2021 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2021.3124156  |2 doi 
028 5 2 |a pubmed24n1109.xml 
035 |a (DE-627)NLM33273448X 
035 |a (NLM)34735344 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Tao, Li  |e verfasserin  |4 aut 
245 1 0 |a Rethinking Motion Representation  |b Residual Frames With 3D ConvNets 
264 1 |c 2021 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 11.11.2021 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Recently, 3D convolutional networks have yielded good performance in action recognition. However, an optical flow stream is still needed for motion representation to ensure better performance, and its computation is very costly. In this paper, we propose a cheap but effective way to extract motion features from videos by utilizing residual frames as the input data for 3D ConvNets. By replacing traditional stacked RGB frames with residual ones, improvements of 35.6 and 26.6 percentage points in top-1 accuracy can be achieved on the UCF101 and HMDB51 datasets when models are trained from scratch using ResNet-18-3D. We analyze the effectiveness of this modality in depth compared to normal RGB video clips and find that better motion features can be extracted using residual frames with 3D ConvNets. Considering that residual frames contain little information about object appearance, we further use a 2D convolutional network to extract appearance features and combine the two to form a two-path solution. In this way, we can achieve better performance than some methods that even use an additional optical flow stream. Moreover, the proposed residual-input path can outperform its RGB counterpart on unseen datasets when the trained models are applied to video retrieval tasks. Large improvements can also be obtained when the residual inputs are applied to video-based self-supervised learning methods, revealing the better motion representation and generalization ability of our proposal. 
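A minimal sketch of the residual-frame idea described in the abstract above, assuming residual frames are simply the differences of consecutive RGB frames in a clip; the preprocessing, function names, and clip shape here are illustrative assumptions, not the authors' released code:

import numpy as np

def residual_clip(frames):
    # frames: (T, H, W, 3) uint8 array of consecutive RGB frames from one clip.
    # Cast to a signed type so the frame-to-frame subtraction does not wrap around.
    f = frames.astype(np.int16)
    # Residual frames: each frame minus its predecessor, shape (T-1, H, W, 3).
    return f[1:] - f[:-1]

# Hypothetical usage: feed residual_clip(clip) instead of the stacked RGB frames
# to a 3D ConvNet such as ResNet-18-3D; a separate 2D ConvNet on a single RGB
# frame can supply the appearance features of the two-path solution.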
650 4 |a Journal Article 
700 1 |a Wang, Xueting  |e verfasserin  |4 aut 
700 1 |a Yamasaki, Toshihiko  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 30(2021) vom: 04., Seite 9231-9244  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:30  |g year:2021  |g day:04  |g pages:9231-9244 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2021.3124156  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 30  |j 2021  |b 04  |h 9231-9244