"May I Speak?" : Multi-modal Attention Guidance in Social VR Group Conversations

In this paper, we present a novel multi-modal attention guidance method designed to address the challenges of turn-taking dynamics in meetings and enhance group conversations within virtual reality (VR) environments. Recognizing the difficulties posed by a confined field of view and the absence of detailed gesture tracking in VR, our proposed method aims to mitigate the challenges of noticing new speakers attempting to join the conversation. This approach tailors attention guidance, providing a nuanced experience for highly engaged participants while offering subtler cues for those less engaged, thereby enriching the overall meeting dynamics. Through group interview studies, we gathered insights to guide our design, resulting in a prototype that employs light as a diegetic guidance mechanism, complemented by spatial audio. The combination creates an intuitive and immersive meeting environment, effectively directing users' attention to new speakers. An evaluation study, comparing our method to state-of-the-art attention guidance approaches, demonstrated significantly faster response times (p < 0.001), heightened perceived conversation satisfaction (p < 0.001), and preference (p < 0.001) for our method. Our findings contribute to the understanding of design implications for VR social attention guidance, opening avenues for future research and development.

Detailed Description

Bibliographic Details

Published in: IEEE transactions on visualization and computer graphics. - 1996. - PP(2024), 07 March
Main Author: Lee, Geonsun (Author)
Other Authors: Lee, Dae Yeol, Su, Guan-Ming, Manocha, Dinesh
Format: Online Article
Language: English
Published: 2024
Parent Work: IEEE transactions on visualization and computer graphics
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM369415353
003 DE-627
005 20240308233026.0
007 cr uuu---uuuuu
008 240308s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TVCG.2024.3372119  |2 doi 
028 5 2 |a pubmed24n1320.xml 
035 |a (DE-627)NLM369415353 
035 |a (NLM)38451772 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Lee, Geonsun  |e verfasserin  |4 aut 
245 1 0 |a "May I Speak?"  |b Multi-modal Attention Guidance in Social VR Group Conversations 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 08.03.2024 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a In this paper, we present a novel multi-modal attention guidance method designed to address the challenges of turn-taking dynamics in meetings and enhance group conversations within virtual reality (VR) environments. Recognizing the difficulties posed by a confined field of view and the absence of detailed gesture tracking in VR, our proposed method aims to mitigate the challenges of noticing new speakers attempting to join the conversation. This approach tailors attention guidance, providing a nuanced experience for highly engaged participants while offering subtler cues for those less engaged, thereby enriching the overall meeting dynamics. Through group interview studies, we gathered insights to guide our design, resulting in a prototype that employs light as a diegetic guidance mechanism, complemented by spatial audio. The combination creates an intuitive and immersive meeting environment, effectively directing users' attention to new speakers. An evaluation study, comparing our method to state-of-the-art attention guidance approaches, demonstrated significantly faster response times (p < 0.001), heightened perceived conversation satisfaction (p < 0.001), and preference (p < 0.001) for our method. Our findings contribute to the understanding of design implications for VR social attention guidance, opening avenues for future research and development.
650 4 |a Journal Article 
700 1 |a Lee, Dae Yeol  |e verfasserin  |4 aut 
700 1 |a Su, Guan-Ming  |e verfasserin  |4 aut 
700 1 |a Manocha, Dinesh  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on visualization and computer graphics  |d 1996  |g PP(2024) vom: 07. März  |w (DE-627)NLM098269445  |x 1941-0506  |7 nnns 
773 1 8 |g volume:PP  |g year:2024  |g day:07  |g month:03 
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2024.3372119  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d PP  |j 2024  |b 07  |c 03
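The tagged MARC display above (field tag, indicators, then `|x`-prefixed subfields) can be turned into structured data with a small sketch. This is a minimal illustration for displays in exactly this `tag ... |code value` shape, not a full MARC 21 parser; the sample lines are taken from the record above.

```python
import re

# A few lines copied from the tagged display above.
RECORD = """\
100 1 |a Lee, Geonsun  |e verfasserin  |4 aut
245 1 0 |a "May I Speak?"  |b Multi-modal Attention Guidance in Social VR Group Conversations
856 4 0 |u http://dx.doi.org/10.1109/TVCG.2024.3372119  |3 Volltext
"""

def parse_tagged_marc(text):
    """Parse a human-readable tagged MARC display into (tag, subfields) pairs.

    Each non-empty line starts with a three-digit field tag; subfields are
    introduced by "|" followed by a one-character subfield code.
    """
    fields = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        tag = line[:3]  # three-digit MARC field tag, e.g. "245"
        rest = line[3:]
        subfields = {}
        # Capture each subfield code and its value up to the next "|".
        for code, value in re.findall(r"\|(\w)\s*([^|]*)", rest):
            subfields[code] = value.strip()
        fields.append((tag, subfields))
    return fields

for tag, subs in parse_tagged_marc(RECORD):
    print(tag, subs)
```

Running this prints the author (field 100), title and subtitle (field 245), and full-text link (field 856) as tag/subfield dictionaries, which is usually enough to extract citation data from such a display.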