Comparing Gaze, Head and Controller Selection of Dynamically Revealed Targets in Head-Mounted Displays
Published in: IEEE Transactions on Visualization and Computer Graphics. - 1996. - Vol. 29 (2023), no. 11, 02 Nov., pp. 4740-4750
Main author:
Other authors:
Format: Online article
Language: English
Published: 2023
Collection: IEEE Transactions on Visualization and Computer Graphics
Subjects: Journal Article; Research Support, Non-U.S. Gov't
Abstract: This paper presents a head-mounted virtual reality study that compared gaze, head, and controller pointing for selection of dynamically revealed targets. Existing studies on head-mounted 3D interaction have focused on pointing and selection tasks where all targets are visible to the user. Our study compared the effects of screen width (field of view), target amplitude and width, and prior knowledge of target location on modality performance. Results show that gaze and controller pointing are significantly faster than head pointing and that increased screen width only positively impacts performance up to a certain point. We further investigated the applicability of existing pointing models. Our analysis confirmed the suitability of previously proposed two-component models for all modalities while uncovering differences for gaze at known and unknown target positions. Our findings provide new empirical evidence for understanding input with gaze, head, and controller and are significant for applications that extend around the user.
Description: Date completed: 03.11.2023; date revised: 13.11.2023; published: Print-Electronic; citation status: MEDLINE
ISSN: 1941-0506
DOI: 10.1109/TVCG.2023.3320235