Browsing by Author "Eisemann, Elmar"
Now showing 1 - 7 of 7
Item: InkVis: A High-Particle-Count Approach for Visualization of Phase-Contrast Magnetic Resonance Imaging Data (The Eurographics Association, 2019)
Authors: de Hoon, Niels; Lawonn, Kai; Jalba, Andrei; Eisemann, Elmar; Vilanova, Anna
Editors: Kozlíková, Barbora; Linsen, Lars; Vázquez, Pere-Pau; Lawonn, Kai; Raidou, Renata Georgia

Phase-Contrast Magnetic Resonance Imaging (PC-MRI) measures volumetric, time-varying blood-flow data of unsurpassed quality and completeness. Such blood-flow data have been shown to have unique potential to improve both the diagnosis and the risk assessment of cardiovascular diseases (CVDs). Typically, PC-MRI data are visualized using streamlines or pathlines; however, these techniques do not sufficiently capture time-varying aspects of the data, e.g., vortex shedding, breakdown, and formation. Experimental flow visualization techniques introduce a visible medium, like smoke or dye, to reveal flow behavior, including its time-varying aspects. We propose a framework that mimics such experimental techniques by using a high number of particles. The framework offers great flexibility and supports various visualization approaches: common traditional flow visualizations, streak visualizations that convey the temporal aspects, and uncertainty visualizations. The latter matter because these patient-specific measurements suffer from noise artifacts and a coarse resolution, which cause uncertainty. Traditional flow visualizations neglect this uncertainty and may therefore give a false sense of certainty, misleading the user into incorrect decisions; previously, domain experts had no means to visualize the effect of the uncertainty in the data. Our framework has been adopted by domain experts to visualize the vortices present in the sinuses of the aortic root, showing its potential. Furthermore, an evaluation among domain experts indicated that having the option to visualize the uncertainty contributed to their confidence in the analysis.
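To give a concrete flavor of the high-particle-count idea, here is a minimal sketch of massed particle advection through a time-varying velocity field. The RK4 integrator, the synthetic velocity function, and all names below are illustrative assumptions; the actual framework samples measured PC-MRI vector volumes and renders the particles, which this sketch does not cover.

```python
import numpy as np

def velocity(points, t):
    """Placeholder velocity field (assumption). A real pipeline would sample
    the time-resolved PC-MRI vector volume here, e.g., trilinearly in space
    and linearly in time."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return np.stack([-y, x, 0.1 * np.sin(t + z)], axis=1)

def advect(points, t, dt):
    """One RK4 step for the whole particle population at once."""
    k1 = velocity(points, t)
    k2 = velocity(points + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(points + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(points + dt * k3, t + dt)
    return points + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Seed a large particle population near an "inlet" and integrate over time;
# keeping periodic position snapshots per particle yields streak-like visuals.
rng = np.random.default_rng(0)
particles = rng.normal(scale=0.05, size=(1_000_000, 3))
snapshots = []
t, dt = 0.0, 0.01
for step in range(100):
    particles = advect(particles, t, dt)
    t += dt
    if step % 10 == 0:
        snapshots.append(particles.copy())  # material for streak rendering
```

Seeding, integrating, and rendering many independent particles is what lets such a framework emulate smoke or dye injection, and per-particle perturbations of the sampled velocity are one way to expose measurement uncertainty.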
Item: Interactions for Seamlessly Coupled Exploration of High-Dimensional Images and Hierarchical Embeddings (The Eurographics Association, 2023)
Authors: Vieth, Alexander; Lelieveldt, Boudewijn; Eisemann, Elmar; Vilanova, Anna; Höllt, Thomas
Editors: Guthe, Michael; Grosch, Thorsten

High-dimensional images (i.e., images with many attributes per pixel) are commonly acquired in many domains, such as geosciences or systems biology. The spatial and attribute information of such data are typically explored separately, e.g., by using coordinated views of an image representation and a low-dimensional embedding of the high-dimensional attribute data. Facing ever-growing image data sets, hierarchical dimensionality-reduction techniques lend themselves to overcoming scalability issues. However, current embedding methods do not provide suitable interactions to reflect image-space exploration. Specifically, it is not possible to adjust the level of detail in the embedding hierarchy to match the changing level of detail in image space that stems from navigation such as zooming and panning. In this paper, we propose such a mapping from image-navigation interactions to embedding-space adjustments. We show how our mapping applies the "overview first, details-on-demand" principle inherent to image exploration in the high-dimensional attribute space. We compare our strategy with the interactions of regular hierarchical embedding techniques and demonstrate the advantages of linking image and embedding interactions through a representative use case.

Item: Interactively Modifying Compressed Sparse Voxel Representations (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Careil, Victor; Billeter, Markus; Eisemann, Elmar
Editors: Panozzo, Daniele; Assarsson, Ulf

Voxels are a popular choice for encoding complex geometry: their regularity makes updates easy and enables random retrieval of values. Their main limitation is poor scaling with respect to resolution. Sparse voxel DAGs (directed acyclic graphs) overcome this hurdle and offer high-resolution representations for real-time rendering, but only handle static data. We introduce a novel data structure that enables interactive modifications of such compressed voxel geometry without requiring de- and recompression. Besides binary data encoding the geometry, it also supports compressed attributes (e.g., color). We illustrate the usefulness of our representation via an interactive large-scale voxel editor supporting carving, filling, copying, and painting.

Item: Next Event Estimation++: Visibility Mapping for Efficient Light Transport Simulation (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Guo, Jerry Jinfeng; Eisemann, Martin; Eisemann, Elmar
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue

In Monte Carlo rendering, determining the visibility between scene points is the most common and compute-intensive operation for establishing paths between the camera and a light source. Unfortunately, many tests reveal occlusions, and the corresponding paths do not contribute to the final image. In this work, we present next event estimation++ (NEE++): a visibility-mapping technique that performs visibility tests in a more informed way by caching voxel-to-voxel visibility probabilities. We show two scenarios: Russian-roulette-style rejection of visibility tests and direct importance sampling of the visibility. We show applications to next event estimation and light sampling in a unidirectional path tracer, and to light-subpath sampling in bidirectional path tracing. The technique is simple to implement, easy to add to existing rendering systems, and comes at almost no cost, as the required information can be extracted directly from the rendering process itself. It discards up to 80% of visibility tests on average while reducing variance by roughly 20% compared to other state-of-the-art light-sampling techniques with the same number of samples. It gracefully handles complex scenes, with efficiency similar to Metropolis light-transport techniques but with more uniform convergence.
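As an illustration of the Russian-roulette scenario described in the NEE++ abstract, the sketch below keeps running visibility statistics between coarse voxel pairs and uses them to probabilistically skip shadow rays, reweighting survivors to keep the estimator unbiased. The voxelization scheme, the clamp value, and all helper names are assumptions made for this sketch, not the paper's implementation.

```python
import random
from collections import defaultdict

class VisibilityCache:
    """Running P(visible) estimates between coarse voxel pairs, learned for
    free from shadow rays the renderer casts anyway (in the spirit of NEE++)."""

    def __init__(self, voxel_size=1.0):
        self.voxel_size = voxel_size
        self.visible = defaultdict(int)
        self.total = defaultdict(int)

    def _key(self, a, b):
        va = tuple(int(c // self.voxel_size) for c in a)
        vb = tuple(int(c // self.voxel_size) for c in b)
        return min(va, vb), max(va, vb)  # order-independent pair key

    def record(self, a, b, was_visible):
        k = self._key(a, b)
        self.total[k] += 1
        self.visible[k] += int(was_visible)

    def probability(self, a, b):
        k = self._key(a, b)
        if self.total[k] == 0:
            return 1.0  # no statistics yet: always test
        return self.visible[k] / self.total[k]

def shadow_contribution(cache, a, b, trace_shadow_ray, radiance):
    """Russian-roulette the shadow ray: skip probably-occluded tests and
    divide surviving contributions by the survival probability."""
    p = max(cache.probability(a, b), 0.05)  # clamp to bound the weight 1/p
    if random.random() >= p:
        return 0.0  # test skipped; occlusion assumed
    visible = trace_shadow_ray(a, b)  # the expensive operation being saved
    cache.record(a, b, visible)
    return radiance / p if visible else 0.0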
Item: Pacific Graphics 2020 - CGF 39-7: Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue

Item: Targeting Shape and Material in Lighting Design (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Usta, Baran; Pont, Sylvia; Eisemann, Elmar
Editors: Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne

Product lighting design is a laborious and time-consuming task. With product illustrations increasingly being rendered, the lighting challenge has moved to the virtual realm. Our approach targets lighting design in the context of a scene with fixed objects, materials, and camera parameters, illuminated by environmental lighting. It offers control over the depiction of material characteristics and shape details by optimizing the illuminating environment map. To that end, we introduce a metric that assesses the shape and material cues in terms of the designed appearance. We formalize the process and support steering the outcome using additional design constraints. We illustrate our solution with several challenging examples.

Item: Texture Browser: Feature-based Texture Exploration (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Luo, Xuejiao; Scandolo, Leonardo; Eisemann, Elmar
Editors: Borgo, Rita; Marai, G. Elisabeta; Landesberger, Tatiana von

Texture is a key characteristic in defining the physical appearance of an object and a crucial element in the creation process of 3D artists. However, retrieving a texture that matches an intended look from an image collection is difficult. Contrary to most photo collections, for which object recognition has proven quite useful, syntactic description of texture characteristics is not straightforward, and even creating appropriate metadata is a very difficult task. In this paper, we propose a system to help explore large unlabeled collections of texture images. The key insight is that spatially grouping textures that share similar features can simplify navigation. Our system uses a pre-trained convolutional neural network to extract high-level semantic image features, which are then mapped to a 2-dimensional location using an adaptation of t-SNE, a dimensionality-reduction technique. We describe an interface to visualize and explore the resulting distribution, and we provide a series of enhanced navigation tools (prioritized t-SNE, scalable clustering, and multi-resolution embedding) to further facilitate exploration and retrieval tasks. Finally, we present the results of a user evaluation that demonstrates the effectiveness of our solution.
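The baseline pipeline this abstract describes, pre-trained CNN features projected to 2D with t-SNE, can be sketched with off-the-shelf components. The sketch below uses torchvision's ResNet-18 as a stand-in feature extractor and scikit-learn's plain t-SNE; the paper's prioritized, multi-resolution t-SNE variant and its browsing interface are not reproduced here, and the specific network and preprocessing are assumptions.

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.manifold import TSNE

# Stand-in feature extractor: ResNet-18 truncated before its classifier,
# yielding a 512-dimensional global feature per image.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed_textures(image_paths):
    """Map each texture image to a 2D location via CNN features + t-SNE."""
    feats = []
    with torch.no_grad():
        for path in image_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(extractor(x).flatten().numpy())
    feats = np.stack(feats)
    # Plain t-SNE as a baseline; perplexity must stay below the image count.
    return TSNE(n_components=2, perplexity=30).fit_transform(feats)
```

Textures with similar CNN features land near each other in the resulting 2D layout, which is the grouping property the browsing interface builds on.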