LEADER |
01000naa a22002652 4500 |
001 |
NLM355201755 |
003 |
DE-627 |
005 |
20231226063839.0 |
007 |
cr uuu---uuuuu |
008 |
231226s2023 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2022.3230934
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1183.xml
|
035 |
|
|
|a (DE-627)NLM355201755
|
035 |
|
|
|a (NLM)37015429
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Li, Tengpeng
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Knowledge-Enriched Attention Network With Group-Wise Semantic for Visual Storytelling
|
264 |
|
1 |
|c 2023
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 06.06.2023
|
500 |
|
|
|a Date Revised 06.06.2023
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Visual storytelling is a technically challenging task that aims to generate an imaginative yet coherent multi-sentence story from a group of related images. Existing methods often produce direct and rigid descriptions of the apparent image content because they cannot explore implicit information beyond the images; consequently, they fail to capture consistent dependencies in a holistic representation, which impairs the generation of reasonable and fluent stories. To address these problems, a novel knowledge-enriched attention network with a group-wise semantic model is proposed. Three main novel components are designed and validated by substantial experiments to reveal their practical advantages. First, a knowledge-enriched attention network is designed to extract implicit concepts from an external knowledge system, and a cascaded cross-modal attention mechanism then builds imaginative and concrete representations from these concepts. Second, a group-wise semantic module with second-order pooling is developed to provide globally consistent guidance. Third, a unified one-stage story generation model with an encoder-decoder structure is proposed to train and infer the knowledge-enriched attention network, the group-wise semantic module and the multi-modal story generation decoder jointly in an end-to-end fashion. Substantial experiments on visual storytelling datasets with both objective and subjective evaluation metrics demonstrate the superior performance of the proposed scheme compared with other state-of-the-art methods. The source code of this work can be found at https://mic.tongji.edu.cn
|
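As an illustrative aside to the abstract above, the following minimal Python sketch shows the kind of second-order (bilinear) pooling that the group-wise semantic module relies on; the tensor shapes, function names and normalization steps are assumptions for illustration only, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def second_order_pooling(feats: torch.Tensor) -> torch.Tensor:
        # feats: (N, D) visual features for the N images of one story group
        # (shape and interface are illustrative assumptions).
        gram = feats.t() @ feats / feats.shape[0]                   # D x D second-order statistics
        gram = torch.sign(gram) * torch.sqrt(gram.abs() + 1e-12)    # signed square-root normalization
        return F.normalize(gram.flatten(), dim=0)                   # L2-normalized global descriptor

    # Example: a story group of 5 images with 512-d features each
    group_feats = torch.randn(5, 512)
    global_guidance = second_order_pooling(group_feats)             # shape: (512 * 512,)

The flattened, normalized matrix of pairwise feature interactions can then serve as a single globally consistent guidance vector for the story decoder.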
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Wang, Hanli
|e verfasserin
|4 aut
|
700 |
1 |
|
|a He, Bin
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Chen, Chang Wen
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 45(2023), 7 vom: 20. Juli, Seite 8634-8645
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:45
|g year:2023
|g number:7
|g day:20
|g month:07
|g pages:8634-8645
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2022.3230934
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 45
|j 2023
|e 7
|b 20
|c 07
|h 8634-8645
|