37-Issue 6
Browsing 37-Issue 6 by Issue Date
Now showing 1 - 20 of 27
Item A Survey of Surface‐Based Illustrative Rendering for Visualization (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Lawonn, Kai; Viola, Ivan; Preim, Bernhard; Isenberg, Tobias; Chen, Min and Benes, Bedrich
In this paper, we survey illustrative rendering techniques for 3D surface models. We first discuss the field of illustrative visualization in general and provide a new definition for this sub‐area of visualization. For the remainder of the survey, we then focus on surface‐based models. We start by briefly summarizing the differential geometry fundamental to many approaches and discuss additional general requirements for the underlying models and the methods' implementations. We then provide an overview of low‐level illustrative rendering techniques including sparse lines, stippling and hatching, and illustrative shading, connecting each of them to practical examples of visualization applications. We also mention evaluation approaches and list various application fields, before we close with a discussion of the state of the art and future work.

Item PencilArt: A Chromatic Penciling Style Generation Framework (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Gao, Chengying; Tang, Mengyue; Liang, Xiangguo; Su, Zhuo; Zou, Changqing; Chen, Min and Benes, Bedrich
Non‐photorealistic rendering has been an active area of research for decades, yet few methods concentrate on rendering chromatic penciling style. In this paper, we present a framework named PencilArt for chromatic penciling style generation from wild photographs. The structural outline and textured map for composing the chromatic pencil drawing are generated, respectively. First, we take advantage of a deep neural network to produce the structural outline with proper intensity variation and conciseness. Next, for the textured map, we follow the painting process of artists to adjust the tone of input images to match the luminance histogram and pencil textures of real drawings. Eventually, we evaluate PencilArt via a series of comparisons to previous work, showing that our results better capture the main features of real chromatic pencil drawings and have an improved visual appearance.
Item Vector Field Map Representation for Near Conformal Surface Correspondence (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Wang, Y.; Liu, B.; Zhou, K.; Tong, Y.; Chen, Min and Benes, Bedrich
Based on a new spectral vector field analysis on triangle meshes, we construct a compact representation for near conformal mesh surface correspondences. Generalizing the functional map representation, our representation uses the map between the low‐frequency tangent vector fields induced by the correspondence. While our representation is as efficient, it is also capable of handling a more generic class of correspondence inference. We also formulate the vector field preservation constraints and regularization terms for correspondence inference, with function preservation treated as a special case. A number of important vector field–related constraints can be implicitly enforced in our representation, including the commutativity of the mapping with the usual gradient, curl and divergence operators, or angle preservation under near conformal correspondence. For function transfer between shapes, the preservation of function values on landmarks can be strictly enforced through our gradient domain representation, enabling transfer across different topologies. With the vector field map representation, a novel class of constraints can be specified for the alignment of designed or computed vector field pairs. We demonstrate the advantages of the vector field map representation in tests on conformal datasets and near‐isometric datasets.

Item Data‐Driven Crowd Motion Control With Multi‐Touch Gestures (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Shen, Yijun; Henry, Joseph; Wang, He; Ho, Edmond S. L.; Komura, Taku; Shum, Hubert P. H.; Chen, Min and Benes, Bedrich
Controlling a crowd using multi‐touch devices appeals to the computer games and animation industries, as such devices provide a high‐dimensional control signal that can effectively define the crowd formation and movement. However, existing works relying on pre‐defined control schemes require the users to learn a scheme that may not be intuitive. We propose a data‐driven gesture‐based crowd control system, in which the control scheme is learned from example gestures provided by different users. In particular, we build a database with pairwise samples of gestures and crowd motions. To effectively generalize the gesture style of different users, such as the use of different numbers of fingers, we propose a set of gesture features for representing a set of hand gesture trajectories. Similarly, to represent crowd motion trajectories of different numbers of characters over time, we propose a set of crowd motion features that are extracted from a Gaussian mixture model.
Given a run‐time gesture, our system extracts the nearest gestures from the database and interpolates the corresponding crowd motions in order to generate the run‐time control. Our system is accurate and efficient, making it suitable for real‐time applications such as real‐time strategy games and interactive animation controls.

Item Temporally Consistent Motion Segmentation From RGB‐D Video (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Bertholet, P.; Ichim, A.E.; Zwicker, M.; Chen, Min and Benes, Bedrich
Temporally consistent motion segmentation from RGB‐D videos is challenging because of the limitations of current RGB‐D sensors. We formulate segmentation as a motion assignment problem, where a motion is a sequence of rigid transformations through all frames of the input. We capture the quality of each potential assignment by defining an appropriate energy function that accounts for occlusions and a sensor‐specific noise model. To make energy minimization tractable, we work with a discrete set instead of the continuous, high‐dimensional space of motions, where the discrete motion set provides an upper bound for the original energy. We repeatedly minimize our energy, and in each step extend and refine the motion set to further lower the bound. A quantitative comparison to the current state of the art demonstrates the benefits of our approach in difficult scenarios.

Item Laplace–Beltrami Operator on Point Clouds Based on Anisotropic Voronoi Diagram (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Qin, Hongxing; Chen, Yi; Wang, Yunhai; Hong, Xiaoyang; Yin, Kangkang; Huang, Hui; Chen, Min and Benes, Bedrich
The symmetrizable and converged Laplace–Beltrami operator is an indispensable tool for spectral geometrical analysis of point clouds. The operator introduced by Liu et al. [LPG12] is guaranteed to be symmetrizable, but its convergence degrades when it is applied to models with sharp features. In this paper, we propose a novel operator, which is not only symmetrizable but can also handle point‐sampled surfaces containing significant sharp features. By constructing the anisotropic Voronoi diagram in the local tangential space, the operator can be well constructed for any given point. To compute the area of an anisotropic Voronoi cell, we introduce an efficient approximation by projecting the cell to the local tangent plane, and we prove its convergence. We present numerical experiments that clearly demonstrate the robustness and efficiency of the proposed operator for point clouds that may contain noise, outliers, and non‐uniformities in thickness and spacing. Moreover, we show that its spectrum is more accurate than those of existing operators for scan points or surfaces with sharp features.

Item Issue Information (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Chen, Min and Benes, Bedrich

Item Sketching in Gestalt Space: Interactive Shape Abstraction through Perceptual Reasoning (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Kratt, J.; Niese, T.; Hu, R.; Huang, H.; Pirk, S.; Sharf, A.; Cohen‐Or, D.; Deussen, O.; Chen, Min and Benes, Bedrich
We present an interactive method that allows users to easily abstract complex 3D models with only a few strokes. The key idea is to employ well‐known Gestalt principles to help generalize user inputs into a full model abstraction while accounting for form, perceptual patterns and semantics of the model. Using these principles, we alleviate the user's need to explicitly define shape abstractions. We utilize structural characteristics such as repetitions, regularity and similarity to transform user strokes into full 3D abstractions. As the user sketches over shape elements, we identify Gestalt groups and later abstract them to maintain their structural meaning. Unlike previous approaches, we operate directly on the geometric elements, in a sense applying Gestalt principles in 3D.
We demonstrate the effectiveness of our approach with a series of experiments, including a variety of complex models and two extensive user studies to evaluate our framework.

Item The State of the Art in Vortex Extraction (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Günther, Tobias; Theisel, Holger; Chen, Min and Benes, Bedrich
Vortices are commonly understood as rotating motions in fluid flows. The analysis of vortices plays an important role in numerous scientific applications, such as in engineering, meteorology, oceanology, medicine and many more. The successful analysis consists of three steps: vortex definition, extraction and visualization. All three have a long history, and the early themes and topics from the 1970s survived to this day, namely, the identification of vortex cores, their extent and the choice of suitable reference frames. This paper provides an overview of the advances that have been made in the last 40 years. We provide sufficient background on differential vector field calculus, extraction techniques like critical point search and the parallel vectors operator, and we introduce the notion of reference frame invariance. We explain the most important region‐based and line‐based methods, integration‐based and geometry‐based approaches, recent objective techniques, the selection of reference frames by means of flow decompositions, as well as a recent local optimization‐based technique. We point out relationships between the various approaches, classify the literature and identify open problems and challenges for future work.

Item Reproducing Spectral Reflectances From Tristimulus Colours (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Otsu, H.; Yamamoto, M.; Hachisuka, T.; Chen, Min and Benes, Bedrich
Physically based rendering systems often support spectral rendering to simulate light transport in the real world. Material representations in such simulations need to be defined as spectral distributions. Since commonly available material data are in tristimulus colours, we would ideally like to obtain spectral distributions from tristimulus colours as input to spectral rendering systems. Reproduction of spectral distributions given tristimulus colours, however, has been considered an ill‐posed problem, since a single tristimulus colour corresponds to a set of different spectra due to metamerism.
We show how to resolve this problem using a data‐driven approach based on measured spectra, and we propose a practical algorithm that can faithfully reproduce a corresponding spectrum from the given tristimulus colour alone. The key observation in colour science is that a natural measured spectrum is usually well approximated by a weighted sum of a few basis functions. We show how to reformulate the conversion of tristimulus colours to spectra via principal component analysis. To improve the accuracy of conversion, we propose a greedy clustering algorithm which minimizes reconstruction error. Using pre‐computation, the runtime computation is just a single matrix multiplication with an input tristimulus colour. Numerical experiments show that our method reproduces the reference measured spectra well using only the tristimulus colours as input.

Item On‐The‐Fly Tracking of Flame Surfaces for the Visual Analysis of Combustion Processes (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Oster, T.; Abdelsamie, A.; Motejat, M.; Gerrits, T.; Rössl, C.; Thévenin, D.; Theisel, H.; Chen, Min and Benes, Bedrich
The visual analysis of combustion processes is one of the challenges of modern flow visualization. In turbulent combustion research, the behaviour of the flame surface contains important information about the interactions between turbulence and chemistry. The extraction and tracking of this surface is crucial for understanding combustion processes. This is impossible to realize as a post‐process because of the size of the involved datasets, which are too large to be stored on disk. We present an on‐the‐fly method for tracking the flame surface directly during simulation and computing the local tangential surface deformation for arbitrary time intervals. In a massively parallel simulation, the data are distributed over many processes and only a single time step is in memory at any time. To satisfy the demands on parallelism and accuracy posed by this situation, we track the surface with independent micro‐patches and adapt their distribution as needed to maintain numerical stability. With our method, we enable combustion researchers to observe the detailed movement and deformation of the flame surface over extended periods of time and thus gain novel insights into the mechanisms of turbulence–chemistry interactions. We validate our method on analytic ground truth data and show its applicability on two real‐world simulations.

Item Quantitative and Qualitative Analysis of the Perception of Semi‐Transparent Structures in Direct Volume Rendering (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Englund, R.; Ropinski, T.; Chen, Min and Benes, Bedrich
Direct Volume Rendering (DVR) provides the possibility to visualize volumetric data sets as they occur in many scientific disciplines. With DVR, semi‐transparency is used to convey the complexity of the data. Unfortunately, semi‐transparency introduces challenges in spatial comprehension of the data, as the ambiguities inherent to semi‐transparent representations affect spatial comprehension. Accordingly, many techniques have been introduced to enhance the spatial comprehension of DVR images. In this paper, we present our findings obtained from two evaluations investigating the perception of semi‐transparent structures from volume rendered images.
We have conducted a user evaluation in which we compared standard DVR with five techniques previously proposed to enhance the spatial comprehension of DVR images. In this study, we investigated the perceptual performance of these techniques and compared them against each other in a large‐scale quantitative user study with 300 participants. Each participant completed micro‐tasks designed such that the aggregated feedback gives insight into how well these techniques aid the user in perceiving the depth and shape of objects. To further clarify the findings, we conducted a qualitative evaluation in which we interviewed three experienced visualization researchers, in order to identify the benefits and shortcomings of the individual techniques.

Item Part‐Based Mesh Segmentation: A Survey (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Rodrigues, Rui S. V.; Morgado, José F. M.; Gomes, Abel J. P.; Chen, Min and Benes, Bedrich
This paper surveys mesh segmentation techniques and algorithms, with a focus on part‐based segmentation, that is, segmentation that divides a mesh (featuring a 3D object) into meaningful parts. Part‐based segmentation applies to a single object and also to a family of objects (i.e. co‐segmentation). However, we shall not address chart‐based segmentation here, though some mesh co‐segmentation methods employ chart‐based segmentation in the initial step of their pipeline. Finally, the taxonomy proposed in this paper is new in the sense that it classifies each segmentation algorithm by the dimension (i.e. 1D, 2D and 3D) of the representation of object parts. The leading idea behind this survey is to identify the properties and limitations of the state‐of‐the‐art algorithms, to shed light on the challenges for future work.

Item State of the Art on Stylized Fabrication (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Bickel, Bernd; Cignoni, Paolo; Malomo, Luigi; Pietroni, Nico; Chen, Min and Benes, Bedrich
Digital fabrication devices are powerful tools for creating tangible reproductions of 3D digital models. Most available printing technologies aim at producing an accurate copy of a three‐dimensional shape.
However, fabrication technologies can also be used to create a stylistic representation of a digital shape. We refer to this class of methods as ‘stylized fabrication methods’. These methods abstract geometric and physical features of a given shape to create an unconventional representation, to produce an optical illusion or to devise a particular interaction with the fabricated model. In this state‐of‐the‐art report, we classify and overview this broad and emerging class of approaches and also propose possible directions for future research.

Item Visually Supporting Multiple Needle Placement in Irreversible Electroporation Interventions (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Kreiser, J.; Freedman, J.; Ropinski, T.; Chen, Min and Benes, Bedrich
Irreversible electroporation (IRE) is a minimally invasive technique for small tumour ablation. Multiple needles are inserted around the planned treatment zone and, depending on its size, inside it as well. An applied electric field triggers instant cell death around this zone. To ensure the correct application of IRE, certain criteria need to be fulfilled. The needles' placement in the tissue has to be parallel, at the same depth, and in a pattern which allows the electric field to effectively destroy the targeted lesions. As multiple needles need to fulfill these criteria simultaneously, it is challenging for the surgeon to perform a successful IRE. Therefore, we propose a visualization which exploits intuitive visual coding to support the surgeon when conducting IREs. We consider two scenarios: first, monitoring IRE parameters while inserting needles during laparoscopic surgery; second, validating IRE parameters in post‐placement scenarios using computed tomography. With the help of an easy‐to‐comprehend and lightweight visualization, surgeons are enabled to quickly detect visually what needs to be adjusted. We have evaluated our visualization together with surgeons to investigate its practical use for IRE liver ablations. A quantitative study shows its effectiveness compared to a single 3D view placement method.

Item A New Class of Guided C2 Subdivision Surfaces Combining Good Shape with Nested Refinement (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Karčiauskas, Kęstutis; Peters, Jörg; Chen, Min and Benes, Bedrich
Converting quadrilateral meshes to smooth manifolds, guided subdivision offers a way to combine the good highlight line distribution of recent G‐spline constructions with the refinability of subdivision surfaces. This avoids the complex refinement of G‐spline constructions and the poor shape of standard subdivision. Guided subdivision can then be used both to generate the surface and to hierarchically compute functions on the surface. Specifically, we present a subdivision algorithm of polynomial degree bi‐6 and a curvature‐bounded algorithm of degree bi‐5. We prove that the common eigenstructure of this class of subdivision algorithms is determined by their guide, and we demonstrate that their eigenspectrum (speed of contraction) can be adjusted without harming the shape. For practical implementation, a finite number of subdivision steps can be completed by a high‐quality cap. Near irregular points this allows leveraging standard polynomial tools both for rendering the surface and for approximately integrating functions on the surface.
Specifically, we present a subdivision algorithm of polynomial degree bi‐6 and a curvature bounded algorithm of degree bi‐5.Item Re‐Weighting Firefly Samples for Improved Finite‐Sample Monte Carlo Estimates(© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Zirr, Tobias; Hanika, Johannes; Dachsbacher, Carsten; Chen, Min and Benes, BedrichSamples with high contribution but low probability density, often called fireflies, occur in all practical Monte Carlo estimators and are part of computing unbiased estimates. For finite‐sample estimates, however, they can lead to excessive variance. Rejecting all samples classified as outliers, as suggested in previous work, leads to estimates that are too low and can cause undesirable artefacts. In this paper, we show how samples can be re‐weighted depending on their contribution and sampling frequency such that the finite‐sample estimate gets closer to the correct expected value and the variance can be controlled. For this, we first derive a theory for how samples should ideally be re‐weighted and that this would require the probability density function of the optimal sampling strategy. As this probability density function is generally unknown, we show how the discrepancy between the optimal and the actual sampling strategy can be estimated and used for re‐weighting in practice. We describe an efficient algorithm that allows for the necessary analysis of per‐pixel sample distributions in the context of Monte Carlo rendering without storing any individual samples, with only minimal changes to the rendering algorithm. It causes negligible runtime overhead, works in constant memory and is well suited for parallel and progressive rendering. 
The re‐weighting runs as a fast post‐process, can be controlled interactively, and our approach is non‐destructive in that the unbiased result can be reconstructed at any time.

Item Bidirectional Rendering of Vector Light Transport(© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Jarabo, Adrian; Arellano, Victor; Chen, Min and Benes, Bedrich
At the foundation of many rendering algorithms lies the symmetry between the path traversed by light and its adjoint path starting from the camera. However, several effects, including polarization and fluorescence, break that symmetry and are defined only along the direction of light propagation. This reduces the applicability of bidirectional methods that exploit this symmetry to simulate light transport effectively. In this work, we focus on how to include these non‐symmetric effects within a bidirectional rendering algorithm. We generalize the path integral to support the constraints imposed by non‐symmetric light transport. Based on this theoretical framework, we propose modifications to two bidirectional methods, namely bidirectional path tracing and photon mapping, extending them to support polarization and fluorescence, in both steady and transient state.

Item A Study of the Effect of Doughnut Chart Parameters on Proportion Estimation Accuracy(© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Cai, X.; Efstathiou, K.; Xie, X.; Wu, Y.; Shi, Y.; Yu, L.; Chen, Min and Benes, Bedrich
Pie and doughnut charts nicely convey the part–whole relationship, and they have become the most recognizable chart types for representing proportions in business and data statistics. Many experiments have been carried out to study human perception of the pie chart, while the corresponding aspects of the doughnut chart have seldom been tested, even though the two chart types share several similarities.
In this paper, we report on a series of experiments in which we explored the effect of a few fundamental design parameters of doughnut charts, and of additional visual cues, on the accuracy of proportion estimates. Since mobile devices are becoming the primary devices for casual reading, we performed all our experiments on such devices. Moreover, the screen size of mobile devices is limited, and it is therefore important to know how this size constraint affects proportion accuracy. For this reason, in our first experiment we tested the chart size and found that it has no significant effect on proportion accuracy. In our second experiment, we focused on the effect of the doughnut chart's inner radius and found that proportion accuracy is insensitive to the inner radius, except for the thinnest doughnut chart. In the third experiment, we studied the effect of visual cues and found that marking the centre of the doughnut chart or adding tick marks at 25% intervals improves proportion accuracy. Based on the results of the three experiments, we discuss the design of doughnut charts and offer suggestions for improving the accuracy of proportion estimates.

Item Data Reduction Techniques for Simulation, Visualization and Data Analysis(© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Li, S.; Marsaglia, N.; Garth, C.; Woodring, J.; Clyne, J.; Childs, H.; Chen, Min and Benes, Bedrich
Data reduction is increasingly being applied to scientific data for numerical simulations, scientific visualizations and data analyses. It is most often used to lower I/O and storage costs, and sometimes to lower in‐memory data size as well. With this paper, we consider five categories of data reduction techniques based on their information loss: (1) truly lossless, (2) near lossless, (3) lossy, (4) mesh reduction and (5) derived representations. We then survey available techniques in each of these categories, summarize their properties from a practical point of view and discuss relative merits within a category. We believe, in total, this work will enable simulation scientists and visualization/data analysis scientists to decide which data reduction techniques will be most helpful for their needs.
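The information-loss distinction at the heart of that survey's taxonomy can be made concrete with a minimal sketch. This is our own illustration, not code from the survey: the function names, the use of `zlib`, and the quantization step are all assumptions chosen to contrast a truly lossless round trip with a lossy one whose pointwise error is bounded.

```python
# Illustrative sketch (not from the survey): contrasts category (1), truly
# lossless reduction with a bit-exact round trip, against category (3), lossy
# reduction via uniform scalar quantization with a user-chosen error bound.
import struct
import zlib

def lossless_reduce(values):
    """Pack floats to bytes and compress; decompression restores them exactly."""
    raw = struct.pack(f"{len(values)}d", *values)
    return zlib.compress(raw)

def lossless_restore(blob, n):
    return list(struct.unpack(f"{n}d", zlib.decompress(blob)))

def lossy_reduce(values, max_error):
    """Quantize to multiples of 2*max_error; each value's error stays <= max_error."""
    step = 2.0 * max_error
    return [round(v / step) for v in values]

def lossy_restore(codes, max_error):
    step = 2.0 * max_error
    return [c * step for c in codes]

data = [0.0, 0.1234, 3.14159, -2.71828, 100.5]

# Truly lossless: the restored data is identical to the input.
blob = lossless_reduce(data)
assert lossless_restore(blob, len(data)) == data

# Lossy: the restored data differs, but every error respects the bound.
approx = lossy_restore(lossy_reduce(data, max_error=0.01), max_error=0.01)
assert all(abs(a - v) <= 0.01 for a, v in zip(approx, data))
```

The integer codes produced by the lossy path are far more compressible than the raw doubles, which is the usual motivation for accepting bounded loss in scientific data pipelines.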