LEADER |
01000caa a22002652 4500 |
001 |
NLM363633510 |
003 |
DE-627 |
005 |
20240216232635.0 |
007 |
cr uuu---uuuuu |
008 |
231226s2023 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TVCG.2023.3326568
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1295.xml
|
035 |
|
|
|a (DE-627)NLM363633510
|
035 |
|
|
|a (NLM)37871050
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Chen, Zhutian
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a RL-LABEL
|b A Deep Reinforcement Learning Approach Intended for AR Label Placement in Dynamic Scenarios
|
264 |
|
1 |
|c 2023
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 16.02.2024
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status Publisher
|
520 |
|
|
|a Labels are widely used in augmented reality (AR) to display digital information. Ensuring the readability of AR labels requires placing them in an occlusion-free manner while keeping visual links legible, especially when multiple labels exist in the scene. Although existing optimization-based methods, such as force-based methods, are effective in managing AR labels in static scenarios, they often struggle in dynamic scenarios with constantly moving objects. This is because they focus on generating layouts optimal for the current moment, neglecting future moments and leading to sub-optimal or unstable layouts over time. In this work, we present RL-LABEL, a deep reinforcement learning-based method intended for managing the placement of AR labels in scenarios involving moving objects. RL-LABEL considers both the current and predicted future states of objects and labels, such as positions and velocities, as well as the user's viewpoint, to make informed decisions about label placement. It balances the trade-offs between immediate and long-term objectives. We tested RL-LABEL in simulated AR scenarios on two real-world datasets, showing that it effectively learns the decision-making process for long-term optimization, outperforming two baselines (i.e., no view management and a force-based method) by minimizing label occlusions, line intersections, and label movement distance. Additionally, a user study involving 18 participants indicates that, within our simulated environment, RL-LABEL excels over the baselines in aiding users to identify, compare, and summarize data on labels in dynamic scenes.
|
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Chiappalupi, Daniele
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Lin, Tica
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Yang, Yalong
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Beyer, Johanna
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Pfister, Hanspeter
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on visualization and computer graphics
|d 1996
|g PP(2023) vom: 23. Okt.
|w (DE-627)NLM098269445
|x 1941-0506
|7 nnns
|
773 |
1 |
8 |
|g volume:PP
|g year:2023
|g day:23
|g month:10
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TVCG.2023.3326568
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d PP
|j 2023
|b 23
|c 10
|