Bridging Visual and Textual Semantics: Towards Consistency for Unbiased Scene Graph Generation
Published in: | IEEE Transactions on Pattern Analysis and Machine Intelligence. - 1979. - 46(2024), 11, 15 Oct., pages 7102-7119 |
---|---|
Author: | |
Other authors: | , , |
Format: | Online article |
Language: | English |
Published: | 2024 |
Access to the parent work: | IEEE Transactions on Pattern Analysis and Machine Intelligence |
Keywords: | Journal Article |
Abstract: | Scene Graph Generation (SGG) aims to detect visual relationships in an image. However, due to long-tailed bias, SGG is far from practical. Most methods depend heavily on statistical co-occurrence to generate a balanced dataset, so they are dataset-specific and easily affected by noise. The fundamental cause is that SGG is simplified into a classification task instead of a reasoning task; thus the ability to capture fine-grained details is limited and the difficulty of handling ambiguity is increased. By imitating the dual process of cognitive psychology, a Visual-Textual Semantics Consistency Network (VTSCN) is proposed to model the SGG task as a reasoning process and significantly relieve the long-tailed bias. In VTSCN, as the rapid autonomous process (Type 1 process), we design a Hybrid Union Representation (HUR) module, which is divided into two steps for spatial awareness and working-memory modeling. In addition, as the higher-order reasoning process (Type 2 process), a Global Textual Semantics Modeling (GTS) module is designed to individually model the textual contexts with the word embeddings of pairwise objects. As the final associative process of cognition, a Heterogeneous Semantics Consistency (HSC) module is designed to balance the Type 1 and Type 2 processes. Our VTSCN thus opens a new way for SGG model design by fully considering the human cognitive process. Experiments on the Visual Genome, GQA and PSG datasets show that our method is superior to state-of-the-art methods, and ablation studies validate the effectiveness of VTSCN |
---|---|
Description: | Date Revised: 03 Oct 2024; Published: Print-Electronic; Citation Status: PubMed-not-MEDLINE |
ISSN: | 1939-3539 |
DOI: | 10.1109/TPAMI.2024.3389030 |