LEADER |
01000naa a22002652 4500 |
001 |
NLM333219767 |
003 |
DE-627 |
005 |
20231225221214.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2022 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TVCG.2021.3114848
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1110.xml
|
035 |
|
|
|a (DE-627)NLM333219767
|
035 |
|
|
|a (NLM)34784276
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Luo, Yuyu
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Natural Language to Visualization by Neural Machine Translation
|
264 |
|
1 |
|c 2022
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 04.01.2022
|
500 |
|
|
|a Date Revised 04.01.2022
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Supporting the translation from a natural language (NL) query to a visualization (NL2VIS) can simplify the creation of data visualizations because, if successful, anyone can generate visualizations from tabular data using natural language. State-of-the-art NL2VIS approaches (e.g., NL4DV and FlowSense) are based on semantic parsers and heuristic algorithms, which are not end-to-end and are not designed to support (possibly) complex data transformations. Deep-neural-network-powered neural machine translation models have made great strides in many machine translation tasks, which suggests that they might be viable for NL2VIS as well. In this paper, we present ncNet, a Transformer-based sequence-to-sequence model for supporting NL2VIS, with several novel visualization-aware optimizations, including attention forcing to optimize the learning process and visualization-aware rendering to produce better visualization results. To enhance the machine's ability to comprehend natural language queries, ncNet is also designed to take an optional chart template (e.g., a pie chart or a scatter plot) as an additional input, where the chart template serves as a constraint to limit what can be visualized. We conducted both a quantitative evaluation and a user study, showing that ncNet achieves good accuracy on the nvBench benchmark and is easy to use.
|
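Note: The 520 abstract above describes ncNet as a Transformer-based sequence-to-sequence model that can take an optional chart template as an additional input constraint. The sketch below is only a minimal illustration of that input-serialization idea, not the authors' ncNet code: the toy vocabulary, the TinyNL2VIS class, the <nl>/<template> marker tokens, and the encode() helper are invented for illustration, and the paper's attention forcing and visualization-aware rendering are not implemented here.

    # Illustrative sketch (assumed names, not ncNet): serialize an NL query
    # plus an optional chart-template hint into one token sequence and feed
    # it to a small Transformer encoder-decoder.
    import torch
    import torch.nn as nn

    class TinyNL2VIS(nn.Module):
        def __init__(self, vocab_size, d_model=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            self.transformer = nn.Transformer(
                d_model=d_model, nhead=4,
                num_encoder_layers=2, num_decoder_layers=2,
                batch_first=True)
            self.out = nn.Linear(d_model, vocab_size)

        def forward(self, src_ids, tgt_ids):
            # Causal mask so the decoder only attends to earlier output tokens.
            tgt_mask = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
            hidden = self.transformer(self.embed(src_ids), self.embed(tgt_ids),
                                      tgt_mask=tgt_mask)
            return self.out(hidden)          # per-position vocabulary logits

    # Toy vocabulary; ids 0-2 are special marker tokens.
    vocab = {"<nl>": 0, "<template>": 1, "<bos>": 2,
             "show": 3, "sales": 4, "by": 5, "year": 6, "bar": 7, "chart": 8}

    def encode(nl_tokens, template_tokens=None):
        # Flatten the NL query and the optional chart-template hint into one sequence;
        # the template part acts as a constraint on what chart type may be produced.
        ids = [vocab["<nl>"]] + [vocab[t] for t in nl_tokens]
        if template_tokens:
            ids += [vocab["<template>"]] + [vocab[t] for t in template_tokens]
        return torch.tensor([ids])

    model = TinyNL2VIS(len(vocab))
    src = encode(["show", "sales", "by", "year"], template_tokens=["bar", "chart"])
    tgt = torch.tensor([[vocab["<bos>"]]])   # decoding would extend this step by step
    print(model(src, tgt).shape)             # torch.Size([1, 1, 9])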
650 |
|
4 |
|a Journal Article
|
650 |
|
4 |
|a Research Support, Non-U.S. Gov't
|
700 |
1 |
|
|a Tang, Nan
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Li, Guoliang
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Tang, Jiawei
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Chai, Chengliang
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Qin, Xuedi
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on visualization and computer graphics
|d 1996
|g 28(2022), 1 vom: 16. Jan., Seite 217-226
|w (DE-627)NLM098269445
|x 1941-0506
|7 nnns
|
773 |
1 |
8 |
|g volume:28
|g year:2022
|g number:1
|g day:16
|g month:01
|g pages:217-226
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TVCG.2021.3114848
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 28
|j 2022
|e 1
|b 16
|c 01
|h 217-226
|