DGaze: CNN-Based Gaze Prediction in Dynamic Scenes

We conduct novel analyses of users' gaze behaviors in dynamic virtual scenes and, based on our analyses, we present a novel CNN-based model called DGaze for gaze prediction in HMD-based applications. We first collect 43 users' eye tracking data in 5 dynamic scenes under free-viewing conditions. Next, we perform statistical analysis of our data and observe that dynamic object positions, head rotation velocities, and salient regions are correlated with users' gaze positions. Based on our analysis, we present a CNN-based model (DGaze) that combines object position sequence, head velocity sequence, and saliency features to predict users' gaze positions. Our model can be applied to predict not only realtime gaze positions but also gaze positions in the near future, and it achieves better performance than the prior method. In terms of realtime prediction, DGaze achieves a 22.0% improvement over the prior method in dynamic scenes and an improvement of 9.5% in static scenes, using the angular distance as the evaluation metric. We also propose a variant of our model called DGaze_ET that predicts future gaze positions with higher precision by combining accurate past gaze data gathered using an eye tracker. We further analyze our CNN architecture and verify the effectiveness of each component in our model. We apply DGaze to gaze-contingent rendering and a game, and also present the evaluation results from a user study.
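The abstract reports accuracy as angular distance between predicted and ground-truth gaze. As a minimal sketch (not the authors' code; the function name and the assumption that gaze is given as 3-D view-space direction vectors are hypothetical), that metric can be computed like this:

```python
import numpy as np

def angular_distance_deg(pred_dir, true_dir):
    """Angular error in degrees between two 3-D gaze direction vectors.

    Hypothetical helper: assumes gaze is represented as a direction
    vector from the eye toward the gaze point (need not be unit length).
    """
    pred = np.asarray(pred_dir, dtype=float)
    true = np.asarray(true_dir, dtype=float)
    # Cosine of the angle between the two directions.
    cos = np.dot(pred, true) / (np.linalg.norm(pred) * np.linalg.norm(true))
    # Guard against floating-point values slightly outside [-1, 1].
    cos = np.clip(cos, -1.0, 1.0)
    return np.degrees(np.arccos(cos))

# Example: orthogonal directions are 90 degrees apart.
print(angular_distance_deg([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # → 90.0
```

Percentage improvements like the reported 22.0% would then compare the mean of this error over a test set for DGaze against the same mean for the prior method.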

Detailed Description

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - 26 (2020), no. 5, 12 May, pages 1902-1911
Main Author: Hu, Zhiming (Author)
Other Authors: Li, Sheng; Zhang, Congyi; Yi, Kangrui; Wang, Guoping; Manocha, Dinesh
Format: Online Article
Language: English
Published: 2020
Access to parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article; Research Support, Non-U.S. Gov't
LEADER 01000naa a22002652 4500
001 NLM306649454
003 DE-627
005 20231225123835.0
007 cr uuu---uuuuu
008 231225s2020 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2020.2973473  |2 doi 
028 5 2 |a pubmed24n1022.xml 
035 |a (DE-627)NLM306649454 
035 |a (NLM)32070980 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Hu, Zhiming  |e verfasserin  |4 aut 
245 1 0 |a DGaze  |b CNN-Based Gaze Prediction in Dynamic Scenes 
264 1 |c 2020 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 01.04.2021 
500 |a Date Revised 01.04.2021 
500 |a published: Print-Electronic 
500 |a Citation Status MEDLINE 
520 |a We conduct novel analyses of users' gaze behaviors in dynamic virtual scenes and, based on our analyses, we present a novel CNN-based model called DGaze for gaze prediction in HMD-based applications. We first collect 43 users' eye tracking data in 5 dynamic scenes under free-viewing conditions. Next, we perform statistical analysis of our data and observe that dynamic object positions, head rotation velocities, and salient regions are correlated with users' gaze positions. Based on our analysis, we present a CNN-based model (DGaze) that combines object position sequence, head velocity sequence, and saliency features to predict users' gaze positions. Our model can be applied to predict not only realtime gaze positions but also gaze positions in the near future, and it achieves better performance than the prior method. In terms of realtime prediction, DGaze achieves a 22.0% improvement over the prior method in dynamic scenes and an improvement of 9.5% in static scenes, using the angular distance as the evaluation metric. We also propose a variant of our model called DGaze_ET that predicts future gaze positions with higher precision by combining accurate past gaze data gathered using an eye tracker. We further analyze our CNN architecture and verify the effectiveness of each component in our model. We apply DGaze to gaze-contingent rendering and a game, and also present the evaluation results from a user study.
650 4 |a Journal Article 
650 4 |a Research Support, Non-U.S. Gov't 
700 1 |a Li, Sheng  |e verfasserin  |4 aut 
700 1 |a Zhang, Congyi  |e verfasserin  |4 aut 
700 1 |a Yi, Kangrui  |e verfasserin  |4 aut 
700 1 |a Wang, Guoping  |e verfasserin  |4 aut 
700 1 |a Manocha, Dinesh  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g 26(2020), 5 vom: 12. Mai, Seite 1902-1911  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:26  |g year:2020  |g number:5  |g day:12  |g month:05  |g pages:1902-1911 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2020.2973473  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 26  |j 2020  |e 5  |b 12  |c 05  |h 1902-1911