Real-time tracking of visually attended objects in virtual environments and its application to LOD

Bibliographic Details
Published in: IEEE Transactions on Visualization and Computer Graphics, Vol. 15 (2009), No. 1, 5 Jan., pp. 6-19
Main Author: Lee, Sungkil
Other Authors: Kim, Gerard Jounghyun; Choi, Seungmoon
Format: Online Article
Language: English
Published: 2009
Collection: IEEE Transactions on Visualization and Computer Graphics
Subjects: Journal Article; Research Support, Non-U.S. Gov't
Description
Abstract: This paper presents a real-time framework for computationally tracking the objects visually attended by the user while navigating interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among the candidates in the object saliency map. The framework was implemented on the GPU and exhibits computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing the objects it regarded as visually attended against actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, owing especially to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.
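
The abstract outlines two scoring ingredients: a bottom-up object saliency map and top-down spatial/temporal contexts, whose combination ranks candidate objects and, in the final application, drives level-of-detail selection. The sketch below illustrates one plausible way such scores could be combined and mapped to LOD levels; the class, the field names, the multiplicative weighting, and the linear LOD mapping are all assumptions for illustration, not the paper's actual formulation.

    from dataclasses import dataclass

    @dataclass
    class SceneObject:
        name: str
        bottom_up: float     # normalized object saliency from the saliency map, in [0, 1]
        spatial_ctx: float   # top-down weight from spatial behavior (hypothetical), in [0, 1]
        temporal_ctx: float  # top-down weight from temporal behavior (hypothetical), in [0, 1]

    def attention_score(obj: SceneObject, w_s: float = 0.5, w_t: float = 0.5) -> float:
        # Modulate bottom-up saliency by top-down context; the multiplicative
        # form and the weights w_s, w_t are illustrative choices.
        return obj.bottom_up * (1.0 + w_s * obj.spatial_ctx + w_t * obj.temporal_ctx)

    def lod_level(score: float, max_score: float, num_levels: int = 4) -> int:
        # The most attended object gets level 0 (full detail); others get coarser levels.
        if max_score <= 0.0:
            return num_levels - 1
        coarseness = 1.0 - score / max_score   # 0 for the top-scoring object
        return min(int(coarseness * num_levels), num_levels - 1)

    if __name__ == "__main__":
        scene = [SceneObject("lamp", 0.8, 0.9, 0.2),
                 SceneObject("door", 0.5, 0.1, 0.7),
                 SceneObject("chair", 0.3, 0.2, 0.1)]
        scores = {o.name: attention_score(o) for o in scene}
        top = max(scores, key=scores.get)      # object treated as visually attended
        best = max(scores.values())
        for name, s in scores.items():
            print(name, "attended" if name == top else "", "LOD", lod_level(s, best))

In the paper itself the saliency computation runs on the GPU and the top-down contexts are inferred from the user's navigation behavior; the sketch only mirrors the high-level idea of combining the two cues and letting the resulting attention ranking control geometric detail.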
Record: Date Completed: 05.02.2009; Date Revised: 14.11.2008; Published: Print; Citation Status: MEDLINE
ISSN: 1941-0506
DOI: 10.1109/TVCG.2008.82