Browsing by Author "Ropinski, T."
Now showing 1 - 3 of 3
Item Learning Human Viewpoint Preferences from Sparsely Annotated Models (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Hartwig, S.; Schelling, M.; Onzenoodt, C. v.; Vázquez, P.‐P.; Hermosilla, P.; Ropinski, T.; Hauser, Helwig and Alliez, Pierre
View quality measures compute scores for given views and are used to determine an optimal view in viewpoint selection tasks. Unfortunately, despite their wide adoption, these measures are based on computational quantities, such as entropy, rather than on human preferences. To instead tailor viewpoint measures towards humans, view quality measures need to be able to capture human viewpoint preferences. Therefore, we introduce a large‐scale crowdsourced data set, which contains 58 annotated viewpoints for 3220 ModelNet40 models. Based on this data, we derive a neural view quality measure that abides by human preferences. We further demonstrate that this view quality measure generalizes not only to models unseen during training, but also to unseen model categories. We are thus able to predict view qualities for single images, and to directly predict human‐preferred viewpoints for 3D models by exploiting point‐based learning technology, without having to generate intermediate images or sample the view sphere. We detail our data collection procedure, describe the data analysis and model training, and evaluate the predictive quality of our trained viewpoint measure on unseen models and categories. To our knowledge, this is the first deep learning approach to predict a view quality measure based solely on human preferences.

Item Quantitative and Qualitative Analysis of the Perception of Semi‐Transparent Structures in Direct Volume Rendering (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Englund, R.; Ropinski, T.; Chen, Min and Benes, Bedrich
Direct Volume Rendering (DVR) makes it possible to visualize volumetric data sets as they occur in many scientific disciplines. With DVR, semi‐transparency is employed to convey the complexity of the data. Unfortunately, the ambiguities inherent to semi‐transparent representations hamper spatial comprehension of the data. Accordingly, many techniques have been introduced to enhance the spatial comprehension of DVR images. In this paper, we present our findings from two evaluations investigating the perception of semi‐transparent structures in volume‐rendered images. We compared standard DVR with five techniques previously proposed to enhance the spatial comprehension of DVR images, investigating their perceptual performance against each other in a large‐scale quantitative user study with 300 participants. Each participant completed micro‐tasks designed so that the aggregated feedback gives insight into how well these techniques help the user perceive the depth and shape of objects. To further clarify the findings, we conducted a qualitative evaluation in which we interviewed three experienced visualization researchers, in order to identify the benefits and shortcomings of the individual techniques.

Item Visually Supporting Multiple Needle Placement in Irreversible Electroporation Interventions (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Kreiser, J.; Freedman, J.; Ropinski, T.; Chen, Min and Benes, Bedrich
Irreversible electroporation (IRE) is a minimally invasive technique for small tumour ablation. Multiple needles are inserted around the planned treatment zone and, depending on its size, inside it as well. An applied electric field then triggers instant cell death around this zone. To ensure the correct application of IRE, certain criteria need to be fulfilled: the needles have to be placed parallel to each other in the tissue, at the same depth, and in a pattern which allows the electric field to effectively destroy the targeted lesions. As multiple needles need to fulfill these criteria simultaneously, performing a successful IRE is challenging for the surgeon. Therefore, we propose a visualization which exploits intuitive visual coding to support the surgeon when conducting IREs. We consider two scenarios: first, monitoring IRE parameters while inserting needles during laparoscopic surgery; second, validating IRE parameters after placement using computed tomography. With the help of an easy‐to‐comprehend, lightweight visualization, surgeons can quickly detect visually what needs to be adjusted. We evaluated our visualization together with surgeons to investigate its practical use for IRE liver ablations. A quantitative study shows its effectiveness compared to a single‐3D‐view placement method.