LEADER |
01000caa a22002652 4500 |
001 |
NLM368399540 |
003 |
DE-627 |
005 |
20240606232355.0 |
007 |
cr uuu---uuuuu |
008 |
240214s2024 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2024.3365104
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1430.xml
|
035 |
|
|
|a (DE-627)NLM368399540
|
035 |
|
|
|a (NLM)38349824
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Tu, Yunbin
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a SMART
|b Syntax-Calibrated Multi-Aspect Relation Transformer for Change Captioning
|
264 |
|
1 |
|c 2024
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 06.06.2024
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Change captioning aims to describe the semantic change between two similar images. In this process, viewpoint change is the most typical distractor: it introduces pseudo changes in the appearance and position of objects that overwhelm the real change. Moreover, since the visual signal of change appears in a local region with weak features, it is difficult for the model to translate the learned change features directly into a sentence. In this paper, we propose a syntax-calibrated multi-aspect relation transformer that learns effective change features under different scenes and builds reliable cross-modal alignment between the change features and linguistic words during caption generation. Specifically, a multi-aspect relation learning network is designed to 1) explore fine-grained changes under irrelevant distractors (e.g., viewpoint change) by embedding semantic and relative-position relations into the features of each image; 2) learn two view-invariant image representations by strengthening their global contrastive alignment relation, which helps capture a stable difference representation; and 3) provide the model with prior knowledge about whether and where the semantic change happened by measuring the relation between the captured difference representation and the image pair. In this manner, the model can learn effective change features for caption generation. Further, we introduce the syntax knowledge of Part-of-Speech (POS) and devise a POS-based visual switch to calibrate the transformer decoder. The switch dynamically utilizes visual information during the generation of each word according to its POS, enabling the decoder to build reliable cross-modal alignment and generate a high-quality linguistic sentence about the change. Extensive experiments show that the proposed method achieves state-of-the-art performance on three public datasets.
|
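The 520 abstract above describes a POS-based visual switch that calibrates the transformer decoder: a gate conditioned on the Part-of-Speech of the word being generated controls how much visual information the decoder consumes at each step. The following is a minimal sketch of that idea, assuming a simple sigmoid gate over per-dimension visual features; all module names, dimensions, and the gating form are illustrative guesses, not the authors' implementation.

import torch
import torch.nn as nn

class POSVisualSwitch(nn.Module):
    """Hypothetical gate: modulate visual features by a predicted POS distribution."""
    def __init__(self, hidden_dim: int, num_pos_tags: int):
        super().__init__()
        # Predict a POS distribution for the next word from the decoder state.
        self.pos_head = nn.Linear(hidden_dim, num_pos_tags)
        # Map that distribution to a per-dimension gate over the visual features.
        self.gate = nn.Sequential(nn.Linear(num_pos_tags, hidden_dim), nn.Sigmoid())

    def forward(self, decoder_state: torch.Tensor, visual_feat: torch.Tensor):
        # decoder_state, visual_feat: (batch, hidden_dim)
        pos_dist = torch.softmax(self.pos_head(decoder_state), dim=-1)
        # Visual words (nouns, adjectives) should learn to open the gate;
        # function words (e.g., "the", "of") should learn to close it.
        g = self.gate(pos_dist)
        return g * visual_feat, pos_dist  # gated visual input + POS prediction

# Usage: feed the gated visual feature into the decoder at each generation step.
switch = POSVisualSwitch(hidden_dim=512, num_pos_tags=16)
gated_vis, pos = switch(torch.randn(2, 512), torch.randn(2, 512))
print(gated_vis.shape)  # torch.Size([2, 512])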
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Li, Liang
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Su, Li
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zha, Zheng-Jun
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Huang, Qingming
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 46(2024), 7 vom: 01. Juni, Seite 4926-4943
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:46
|g year:2024
|g number:7
|g day:01
|g month:06
|g pages:4926-4943
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2024.3365104
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 46
|j 2024
|e 7
|b 01
|c 06
|h 4926-4943
|