An Empirical Evaluation of the GPT-4 Multimodal Language Model on Visualization Literacy Tasks
| Field | Value |
|---|---|
| Published in | IEEE Transactions on Visualization and Computer Graphics. - 1996. - PP(2024), 10 Sept. |
| First author | |
| Other authors | |
| Format | Online article |
| Language | English |
| Published | 2024 |
| Part of | IEEE Transactions on Visualization and Computer Graphics |
| Keywords | Journal Article |
| Abstract | Large Language Models (LLMs) like GPT-4 that support multimodal input (i.e., prompts containing images in addition to text) have immense potential to advance visualization research. However, many questions exist about the visual capabilities of such models, including how well they can read and interpret visually represented data. In our work, we address this question by evaluating the GPT-4 multimodal LLM using a suite of task sets meant to assess the model's visualization literacy. The task sets are based on existing work in the visualization community addressing both automated chart question answering and human visualization literacy across multiple settings. Our assessment finds that GPT-4 can perform tasks such as recognizing trends and extreme values, and also demonstrates some understanding of visualization design best practices. By contrast, GPT-4 struggles with simple value retrieval when not provided with the original dataset, lacks the ability to reliably distinguish between colors in charts, and occasionally suffers from hallucination and inconsistency. We conclude by reflecting on the model's strengths and weaknesses as well as the potential utility of models like GPT-4 for future visualization research. We also release all code, stimuli, and results for the task sets at the following link: https://doi.org/10.17605/OSF.IO/F39J6 |
| Description | Date Revised: 18.09.2024; Published: Print-Electronic; Citation Status: Publisher |
| ISSN | 1941-0506 |
| DOI | 10.1109/TVCG.2024.3456155 |