EHTask: Recognizing User Tasks From Eye and Head Movements in Immersive Virtual Reality

Understanding human visual attention in immersive virtual reality (VR) is crucial for many important applications, including gaze prediction, gaze guidance, and gaze-contingent rendering. However, previous works on visual attention analysis typically only explored one specific VR task and paid less attention to the differences between different tasks.

Detailed Description

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - 29(2023), 4, 03 Apr., pages 1992-2004
Main Author: Hu, Zhiming (Author)
Other Authors: Bulling, Andreas; Li, Sheng; Wang, Guoping
Format: Online article
Language: English
Published: 2023
Access to Parent Work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article; Research Support, Non-U.S. Gov't
LEADER 01000naa a22002652 4500
001 NLM334987067
003 DE-627
005 20231225224847.0
007 cr uuu---uuuuu
008 231225s2023 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2021.3138902  |2 doi 
028 5 2 |a pubmed24n1116.xml 
035 |a (DE-627)NLM334987067 
035 |a (NLM)34962869 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Hu, Zhiming  |e verfasserin  |4 aut 
245 1 0 |a EHTask  |b Recognizing User Tasks From Eye and Head Movements in Immersive Virtual Reality 
264 1 |c 2023 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 11.04.2023 
500 |a Date Revised 03.05.2023 
500 |a published: Print-Electronic 
500 |a Citation Status MEDLINE 
520 |a Understanding human visual attention in immersive virtual reality (VR) is crucial for many important applications, including gaze prediction, gaze guidance, and gaze-contingent rendering. However, previous works on visual attention analysis typically only explored one specific VR task and paid less attention to the differences between different tasks. Moreover, existing task recognition methods typically focused on 2D viewing conditions and only explored the effectiveness of human eye movements. We first collect eye and head movements of 30 participants performing four tasks, i.e., Free viewing, Visual search, Saliency, and Track, in fifteen 360-degree VR videos. Using this dataset, we analyze the patterns of human eye and head movements and reveal significant differences across different tasks in terms of fixation duration, saccade amplitude, head rotation velocity, and eye-head coordination. We then propose EHTask, a novel learning-based method that employs eye and head movements to recognize user tasks in VR. We show that our method significantly outperforms the state-of-the-art methods derived from 2D viewing conditions both on our dataset (accuracy of 84.4% versus 62.8%) and on a real-world dataset (61.9% versus 44.1%). As such, our work provides meaningful insights into human visual attention under different VR tasks and guides future work on recognizing user tasks in VR. 
650 4 |a Journal Article 
650 4 |a Research Support, Non-U.S. Gov't 
700 1 |a Bulling, Andreas  |e verfasserin  |4 aut 
700 1 |a Li, Sheng  |e verfasserin  |4 aut 
700 1 |a Wang, Guoping  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g 29(2023), 4 vom: 03. Apr., Seite 1992-2004  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:29  |g year:2023  |g number:4  |g day:03  |g month:04  |g pages:1992-2004 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2021.3138902  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 29  |j 2023  |e 4  |b 03  |c 04  |h 1992-2004