Precueing Object Placement and Orientation for Manual Tasks in Augmented Reality

When a user is performing a manual task, AR or VR can provide information about the current subtask (cueing) and upcoming subtasks (precueing) that makes them easier and faster to complete. Previous research on cueing and precueing in AR and VR has focused on path-following tasks requiring simple actions at each of a series of locations, such as pushing a button or just visiting. We consider a more complex task, whose subtasks involve moving to and picking up an item, moving that item to a designated place while rotating it to a specific angle, and depositing it. We conducted two user studies to examine how people accomplish this task while wearing an AR headset, guided by different visualizations that cue and precue movement and rotation. Participants performed best when given movement information for two successive subtasks and rotation information for a single subtask. In addition, participants performed best when the rotation visualization was split across the manipulated object and its destination.

Detailed Description

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics. - 1996. - 28(2022), 11, 01 Nov., pages 3799-3809
Main Author: Liu, Jen-Shuo (Author)
Other Authors: Tversky, Barbara, Feiner, Steven
Format: Online Article
Language: English
Published: 2022
Access to the parent work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article; Research Support, U.S. Gov't, Non-P.H.S.
LEADER 01000naa a22002652 4500
001 NLM345662873
003 DE-627
005 20231226025602.0
007 cr uuu---uuuuu
008 231226s2022 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2022.3203111  |2 doi 
028 5 2 |a pubmed24n1152.xml 
035 |a (DE-627)NLM345662873 
035 |a (NLM)36049002 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Liu, Jen-Shuo  |e verfasserin  |4 aut 
245 1 0 |a Precueing Object Placement and Orientation for Manual Tasks in Augmented Reality 
264 1 |c 2022 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 31.10.2022 
500 |a Date Revised 15.11.2022 
500 |a published: Print-Electronic 
500 |a Citation Status MEDLINE 
520 |a When a user is performing a manual task, AR or VR can provide information about the current subtask (cueing) and upcoming subtasks (precueing) that makes them easier and faster to complete. Previous research on cueing and precueing in AR and VR has focused on path-following tasks requiring simple actions at each of a series of locations, such as pushing a button or just visiting. We consider a more complex task, whose subtasks involve moving to and picking up an item, moving that item to a designated place while rotating it to a specific angle, and depositing it. We conducted two user studies to examine how people accomplish this task while wearing an AR headset, guided by different visualizations that cue and precue movement and rotation. Participants performed best when given movement information for two successive subtasks and rotation information for a single subtask. In addition, participants performed best when the rotation visualization was split across the manipulated object and its destination.
650 4 |a Journal Article 
650 4 |a Research Support, U.S. Gov't, Non-P.H.S. 
700 1 |a Tversky, Barbara  |e verfasserin  |4 aut 
700 1 |a Feiner, Steven  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g 28(2022), 11 vom: 01. Nov., Seite 3799-3809  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:28  |g year:2022  |g number:11  |g day:01  |g month:11  |g pages:3799-3809 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2022.3203111  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 28  |j 2022  |e 11  |b 01  |c 11  |h 3799-3809