A Scanner Deeply : Predicting Gaze Heatmaps On Visualizations Using Crowdsourced Eye Movement Data

Visual perception is a key component of data visualization. Much prior empirical work uses eye movement as a proxy to understand human visual perception. Diverse apparatus and techniques have been proposed to collect eye movements, but there is still no optimal approach. In this paper, we review 30...

Detailed Description

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - PP(2022), 27 Sept.
Main Author: Shin, Sungbok (Author)
Other Authors: Chung, Sunghyo; Hong, Sanghyun; Elmqvist, Niklas
Format: Online Article
Language: English
Published: 2022
Access to parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM346821509
003 DE-627
005 20240217232102.0
007 cr uuu---uuuuu
008 231226s2022 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2022.3209472  |2 doi 
028 5 2 |a pubmed24n1297.xml 
035 |a (DE-627)NLM346821509 
035 |a (NLM)36166520 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Shin, Sungbok  |e verfasserin  |4 aut 
245 1 2 |a A Scanner Deeply  |b Predicting Gaze Heatmaps On Visualizations Using Crowdsourced Eye Movement Data 
264 1 |c 2022 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 16.02.2024 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a Visual perception is a key component of data visualization. Much prior empirical work uses eye movement as a proxy to understand human visual perception. Diverse apparatus and techniques have been proposed to collect eye movements, but there is still no optimal approach. In this paper, we review 30 prior works for collecting eye movements based on three axes: (1) the tracker technology used to measure eye movements; (2) the image stimulus shown to participants; and (3) the collection methodology used to gather the data. Based on this taxonomy, we employ a webcam-based eyetracking approach using task-specific visualizations as the stimulus. The low technology requirement means that virtually anyone can participate, thus enabling us to collect data at large scale using crowdsourcing: approximately 12,000 samples in total. Choosing visualization images as stimuli means that the eye movements will be specific to perceptual tasks associated with visualization. We use these data to propose SCANNER DEEPLY, a virtual eyetracker model that, given an image of a visualization, generates a gaze heatmap for that image. We employ a computationally efficient, yet powerful convolutional neural network for our model. We compare the results of our work with results from the DVS model and a neural network trained on the Salicon dataset. The analysis of our gaze patterns enables us to understand how users grasp the structure of visualized data. We also make our stimulus dataset of visualization images available as part of this paper's contribution. 
650 4 |a Journal Article 
700 1 |a Chung, Sunghyo  |e verfasserin  |4 aut 
700 1 |a Hong, Sanghyun  |e verfasserin  |4 aut 
700 1 |a Elmqvist, Niklas  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g PP(2022) vom: 27. Sept.  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:PP  |g year:2022  |g day:27  |g month:09 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2022.3209472  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d PP  |j 2022  |b 27  |c 09
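The abstract in field 520 above describes a "virtual eyetracker": a convolutional neural network that takes a visualization image as input and outputs a gaze heatmap for it. As a rough illustration of that idea only, here is a minimal encoder-decoder sketch in PyTorch; it is not the Scanner Deeply architecture from the paper, and every layer choice, size, and name below is an assumption made for this example.

```python
# Minimal, hypothetical image-to-gaze-heatmap network (illustration only;
# NOT the Scanner Deeply model from the paper).
import torch
import torch.nn as nn


class ToyGazeHeatmapNet(nn.Module):
    """Encoder-decoder CNN: visualization image in, gaze heatmap out."""

    def __init__(self):
        super().__init__()
        # Encoder: downsample the RGB input and extract features.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),    # H/2
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),   # H/4
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),  # H/8
            nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to input resolution with one output channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.decoder(self.encoder(x))
        # Normalize over all pixels so the output behaves like a
        # probability-style attention/gaze heatmap.
        b, c, h, w = logits.shape
        return torch.softmax(logits.view(b, -1), dim=1).view(b, c, h, w)


if __name__ == "__main__":
    model = ToyGazeHeatmapNet()
    image = torch.rand(1, 3, 256, 256)   # dummy RGB visualization image
    heatmap = model(image)
    print(heatmap.shape)                 # torch.Size([1, 1, 256, 256])
```

In practice, a model of this kind would be trained on crowdsourced (image, gaze-heatmap) pairs such as those described in the abstract, typically with a distribution-matching loss (e.g. KL divergence) between predicted and observed heatmaps; the specifics used by the paper are not given in this record.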