Learning to Recognize Actions on Objects in Egocentric Video With Attention Dictionaries

We present EgoACO, a deep neural architecture for video action recognition that learns to pool action-context-object descriptors from frame level features by leveraging the verb-noun structure of action labels in egocentric video datasets. The core component is class activation pooling (CAP), a differentiable pooling layer that combines ideas from bilinear pooling for fine-grained recognition and from feature learning for discriminative localization. CAP uses self-attention with a dictionary of learnable weights to pool from the most relevant feature regions. Through CAP, EgoACO learns to decode object and scene context descriptors from video frame features. For temporal modeling we design a recurrent version of class activation pooling termed Long Short-Term Attention (LSTA). LSTA extends convolutional gated LSTM with built-in spatial attention and a re-designed output gate. Action, object and context descriptors are fused by a multi-head prediction that accounts for the inter-dependencies between noun-verb-action structured labels in egocentric video datasets. EgoACO features built-in visual explanations, helping learning and interpretation of discriminative information in video. Results on the two largest egocentric action recognition datasets currently available, EPIC-KITCHENS and EGTEA Gaze+, show that by decoding action-context-object descriptors, the model achieves state-of-the-art recognition performance.
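The abstract describes CAP as pooling spatial features via self-attention against a dictionary of learnable weights. Below is a minimal PyTorch sketch of that general idea; the class name AttentionDictionaryPooling, the tensor shapes, and the softmax-over-space normalization are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionDictionaryPooling(nn.Module):
    """Pools one descriptor per dictionary key from a spatial feature map (sketch)."""
    def __init__(self, in_channels: int, num_keys: int):
        super().__init__()
        # Dictionary of learnable attention weights, one key per pooled descriptor
        # (hypothetical parameterization for illustration).
        self.dictionary = nn.Parameter(torch.randn(num_keys, in_channels) * 0.02)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) frame-level convolutional features.
        b, c, h, w = feats.shape
        flat = feats.view(b, c, h * w)                        # (B, C, HW)
        # Similarity of each dictionary key to each spatial location.
        scores = torch.einsum('kc,bcn->bkn', self.dictionary, flat)
        attn = F.softmax(scores, dim=-1)                      # attention over space
        # Attention-weighted pooling: one C-dimensional descriptor per key.
        return torch.einsum('bkn,bcn->bkc', attn, flat)       # (B, K, C)

# Usage: pool 4 descriptors from a batch of 2048-channel 7x7 feature maps.
pool = AttentionDictionaryPooling(in_channels=2048, num_keys=4)
descriptors = pool(torch.randn(2, 2048, 7, 7))                # -> (2, 4, 2048)
```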


Bibliographic Details

Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 45(2023), 6, 11 June, pages 6674-6687
First author: Sudhakaran, Swathikiran (author)
Other authors: Escalera, Sergio, Lanz, Oswald
Format: Online article
Language: English
Published: 2023
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM321311191
003 DE-627
005 20231225175535.0
007 cr uuu---uuuuu
008 231225s2023 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2021.3058649  |2 doi 
028 5 2 |a pubmed24n1071.xml 
035 |a (DE-627)NLM321311191 
035 |a (NLM)33571086 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Sudhakaran, Swathikiran  |e verfasserin  |4 aut 
245 1 0 |a Learning to Recognize Actions on Objects in Egocentric Video With Attention Dictionaries 
264 1 |c 2023 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 07.05.2023 
500 |a Date Revised 07.05.2023 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a We present EgoACO, a deep neural architecture for video action recognition that learns to pool action-context-object descriptors from frame level features by leveraging the verb-noun structure of action labels in egocentric video datasets. The core component is class activation pooling (CAP), a differentiable pooling layer that combines ideas from bilinear pooling for fine-grained recognition and from feature learning for discriminative localization. CAP uses self-attention with a dictionary of learnable weights to pool from the most relevant feature regions. Through CAP, EgoACO learns to decode object and scene context descriptors from video frame features. For temporal modeling we design a recurrent version of class activation pooling termed Long Short-Term Attention (LSTA). LSTA extends convolutional gated LSTM with built-in spatial attention and a re-designed output gate. Action, object and context descriptors are fused by a multi-head prediction that accounts for the inter-dependencies between noun-verb-action structured labels in egocentric video datasets. EgoACO features built-in visual explanations, helping learning and interpretation of discriminative information in video. Results on the two largest egocentric action recognition datasets currently available, EPIC-KITCHENS and EGTEA Gaze+, show that by decoding action-context-object descriptors, the model achieves state-of-the-art recognition performance 
650 4 |a Journal Article 
700 1 |a Escalera, Sergio  |e verfasserin  |4 aut 
700 1 |a Lanz, Oswald  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 45(2023), 6 vom: 11. Juni, Seite 6674-6687  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:45  |g year:2023  |g number:6  |g day:11  |g month:06  |g pages:6674-6687 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2021.3058649  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 45  |j 2023  |e 6  |b 11  |c 06  |h 6674-6687