LEADER 01000caa a22002652 4500
001 NLM372419437
003 DE-627
005 20240626232410.0
007 cr uuu---uuuuu
008 240517s2024 xx |||||o 00| ||eng c
024 7  |a 10.1109/TVCG.2024.3401755 |2 doi
028 52 |a pubmed24n1452.xml
035    |a (DE-627)NLM372419437
035    |a (NLM)38753475
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Engel, Dominik |e verfasserin |4 aut
245 10 |a Leveraging Self-Supervised Vision Transformers for Segmentation-based Transfer Function Design
264  1 |c 2024
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Revised 25.06.2024
500    |a published: Print-Electronic
500    |a Citation Status Publisher
520    |a In volume rendering, transfer functions are used to classify structures of interest, and to assign optical properties such as color and opacity. They are commonly defined as 1D or 2D functions that map simple features to these optical properties. As the process of designing a transfer function is typically tedious and unintuitive, several approaches have been proposed for their interactive specification. In this paper, we present a novel method to define transfer functions for volume rendering by leveraging the feature extraction capabilities of self-supervised pre-trained vision transformers. To design a transfer function, users simply select the structures of interest in a slice viewer, and our method automatically selects similar structures based on the high-level features extracted by the neural network. Contrary to previous learning-based transfer function approaches, our method does not require training of models and allows for quick inference, enabling an interactive exploration of the volume data. Our approach reduces the amount of necessary annotations by interactively informing the user about the current classification, so they can focus on annotating the structures of interest that still require annotation. In practice, this allows users to design transfer functions within seconds, instead of minutes. We compare our method to existing learning-based approaches in terms of annotation and compute time, as well as with respect to segmentation accuracy. Our accompanying video showcases the interactivity and effectiveness of our method.
650  4 |a Journal Article
700 1  |a Sick, Leon |e verfasserin |4 aut
700 1  |a Ropinski, Timo |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on visualization and computer graphics |d 1996 |g PP(2024) vom: 16. Mai |w (DE-627)NLM098269445 |x 1941-0506 |7 nnns
773 18 |g volume:PP |g year:2024 |g day:16 |g month:05
856 40 |u http://dx.doi.org/10.1109/TVCG.2024.3401755 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_NLM
912    |a GBV_ILN_350
951    |a AR
952    |d PP |j 2024 |b 16 |c 05