Discriminative Cross-Modality Attention Network for Temporal Inconsistent Audio-Visual Event Localization

Bibliographic details
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - Vol. 30 (2021), pp. 7878-7888
Main author: Xuan, Hanyu (Author)
Other authors: Luo, Lei, Zhang, Zhenyu, Yang, Jian, Yan, Yan
Format: Online article
Language: English
Published: 2021
Collection: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
Description
Abstract: It is theoretically insufficient to construct a complete set of semantics in the real world using single-modality data. As a typical application of multi-modality perception, the audio-visual event localization task aims to match audio and visual components to identify the simultaneous events of interest. Although some recent methods have been proposed to deal with this task, they cannot handle the practical situation of temporal inconsistency that is widespread in audio-visual scenes. Inspired by the human perceptual system, which automatically filters out event-unrelated information when performing multi-modality perception, we propose a discriminative cross-modality attention network to simulate such a process. Similar to the human mechanism, our network can adaptively select "where" to attend, "when" to attend, and "which" to attend for audio-visual event localization. In addition, to prevent our network from converging to trivial solutions, a novel eigenvalue-based objective function is proposed to train the whole network to better fuse audio and visual signals, yielding a discriminative and nonlinear multi-modality representation. In this way, even with large temporal inconsistency between the audio and visual sequences, our network is able to adaptively select event-valuable information for audio-visual event localization. Furthermore, we systematically investigate three subtasks of audio-visual event localization, i.e., temporal localization, weakly-supervised spatial localization, and cross-modality localization. The visualization results also help us better understand how our network works.
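To make the cross-modality attention idea concrete, below is a minimal sketch (not the authors' implementation) of a segment-level cross-modal attention module in PyTorch, in which visual features query the audio stream so that event-unrelated audio segments receive low attention weights. The class name, projection layout, and dimensions are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Visual-guided attention over audio segments (illustrative sketch only)."""
    def __init__(self, d_audio, d_visual, d_model):
        super().__init__()
        # Project both modalities into a shared space before computing affinities.
        self.q_proj = nn.Linear(d_visual, d_model)
        self.k_proj = nn.Linear(d_audio, d_model)
        self.v_proj = nn.Linear(d_audio, d_model)

    def forward(self, visual, audio):
        # visual: (T, d_visual) per-segment visual features
        # audio:  (T, d_audio)  per-segment audio features
        q = self.q_proj(visual)                     # (T, d_model)
        k = self.k_proj(audio)                      # (T, d_model)
        v = self.v_proj(audio)                      # (T, d_model)
        scores = q @ k.t() / (k.shape[-1] ** 0.5)   # (T, T) cross-modal affinities
        weights = scores.softmax(dim=-1)            # selects "when"/"which" audio segments to attend to
        return weights @ v                          # audio context aligned to the visual stream

# Example usage with hypothetical dimensions (10 segments, 512-D visual, 128-D audio):
# attn = CrossModalAttention(d_audio=128, d_visual=512, d_model=256)
# fused_audio = attn(torch.randn(10, 512), torch.randn(10, 128))

Because the affinity matrix is computed over all segment pairs, a visual segment can attend to audio evidence that occurs at a different time step, which is one plausible way such a module can tolerate temporal inconsistency between the two streams.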
Description: Date Completed 21.09.2021
Date Revised 21.09.2021
Published: Print-Electronic
Citation Status: PubMed-not-MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2021.3106814