Adaptive Prototype Learning for Weakly-supervised Temporal Action Localization

Detailed Description

Weakly-supervised Temporal Action Localization (WTAL) aims to localize action instances with only video-level labels during training, where two primary issues are localization incompleteness and background interference. To relieve these two issues, recent methods adopt an attention mechanism to activate action instances and simultaneously suppress background ones, which has achieved remarkable progress. Nevertheless, we argue that these two issues have not been well resolved yet. On the one hand, the attention mechanism adopts fixed weights for different videos, which cannot handle the diversity across videos and is therefore deficient in addressing localization incompleteness. On the other hand, previous methods focus only on learning the foreground attention, and the attention weights usually suffer from ambiguity, making it difficult to suppress background interference. To deal with the above issues, in this paper we propose an Adaptive Prototype Learning (APL) method for WTAL, which includes two key designs: (1) an Adaptive Transformer Network (ATN) to explicitly model background and learn video-adaptive prototypes for each specific video, and (2) an OT-based Collaborative (OTC) training strategy to guide the learning of prototypes and remove the ambiguity of the foreground-background separation by introducing an Optimal Transport (OT) algorithm into the collaborative training scheme between the RGB and FLOW streams. These two key designs work together to learn video-adaptive prototypes and solve the above two issues, achieving robust localization. Extensive experimental results on two standard benchmarks (THUMOS14 and ActivityNet) demonstrate that the proposed APL performs favorably against state-of-the-art methods.
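The OTC training strategy described above relies on an Optimal Transport step that assigns video snippets to foreground and background prototypes. The record does not include the paper's formulation, so the following Python sketch shows only one common way such an assignment can be computed, a balanced Sinkhorn-Knopp iteration over a snippet-to-prototype similarity matrix; all names, shapes, and hyperparameters (the temperature epsilon, the two-prototype setup) are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (assumption, not the paper's code): balanced Sinkhorn-Knopp
    # assignment of T video snippets to K prototypes, the kind of Optimal
    # Transport step an OTC-style training scheme could use to separate
    # foreground from background.
    import numpy as np

    def sinkhorn_assignment(similarity, epsilon=0.05, n_iters=50):
        """similarity: (T, K) snippet-to-prototype scores.
        Returns a (T, K) soft assignment: each row sums to ~1, while the
        total mass per prototype stays balanced at about T / K."""
        T, K = similarity.shape
        # Temperature-scaled, numerically stabilized transport kernel.
        Q = np.exp((similarity - similarity.max()) / epsilon)
        Q /= Q.sum()
        for _ in range(n_iters):
            Q /= Q.sum(axis=1, keepdims=True) * T   # rows -> 1/T each
            Q /= Q.sum(axis=0, keepdims=True) * K   # columns -> 1/K each
        return Q * T

    # Toy usage: 100 snippets, 2 prototypes (e.g., foreground vs. background).
    rng = np.random.default_rng(0)
    snippets = rng.normal(size=(100, 64))    # hypothetical snippet features
    prototypes = rng.normal(size=(2, 64))    # hypothetical prototypes
    sim = snippets @ prototypes.T / np.sqrt(64)
    soft_labels = sinkhorn_assignment(sim)
    print(soft_labels.shape, soft_labels.sum(axis=1)[:3])

In this sketch, the balanced column constraint is what prevents the assignment from collapsing onto a single prototype, which is one way an OT step can disambiguate foreground from background compared with a purely attention-based split.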

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - PP(2024), 20 Aug.
Main Author: Luo, Wang (Author)
Other Authors: Ren, Huan; Zhang, Tianzhu; Yang, Wenfei; Zhang, Yongdong
Format: Online article
Language: English
Published: 2024
Access to parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM376504870
003 DE-627
005 20240823232755.0
007 cr uuu---uuuuu
008 240821s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2024.3431915  |2 doi 
028 5 2 |a pubmed24n1510.xml 
035 |a (DE-627)NLM376504870 
035 |a (NLM)39163178 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Luo, Wang  |e verfasserin  |4 aut 
245 1 0 |a Adaptive Prototype Learning for Weakly-supervised Temporal Action Localization 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 23.08.2024 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a Weakly-supervised Temporal Action Localization (WTAL) aims to localize action instances with only video-level labels during training, where two primary issues are localization incompleteness and background interference. To relieve these two issues, recent methods adopt an attention mechanism to activate action instances and simultaneously suppress background ones, which have achieved remarkable progress. Nevertheless, we argue that these two issues have not been well resolved yet. On the one hand, the attention mechanism adopts fixed weights for different videos, which are incapable of handling the diversity of different videos, thus deficient in addressing the problem of localization incompleteness. On the other hand, previous methods only focus on learning the foreground attention and the attention weights usually suffer from ambiguity, resulting in difficulty of suppressing background interference. To deal with the above issues, in this paper we propose an Adaptive Prototype Learning (APL) method for WTAL, which includes two key designs: (1) an Adaptive Transformer Network (ATN) to explicitly model background and learn video-adaptive prototypes for each specific video, (2) an OT-based Collaborative (OTC) training strategy to guide the learning of prototypes and remove the ambiguity of the foreground-background separation by introducing an Optimal Transport (OT) algorithm into the collaborative training scheme between RGB and FLOW streams. These two key designs can work together to learn video-adaptive prototypes and solve the above two issues, achieving robust localization. Extensive experimental results on two standard benchmarks (THUMOS14 and ActivityNet) demonstrate that our proposed APL performs favorably against state-of-the-art methods.
650 4 |a Journal Article 
700 1 |a Ren, Huan  |e verfasserin  |4 aut 
700 1 |a Zhang, Tianzhu  |e verfasserin  |4 aut 
700 1 |a Yang, Wenfei  |e verfasserin  |4 aut 
700 1 |a Zhang, Yongdong  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g PP(2024) vom: 20. Aug.  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:PP  |g year:2024  |g day:20  |g month:08 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2024.3431915  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d PP  |j 2024  |b 20  |c 08