31-Issue 6
Item: Efficiently Simulating the Bokeh of Polygonal Apertures in a Post‐Process Depth of Field Shader (The Eurographics Association and Blackwell Publishing Ltd., 2012)
McIntosh, L.; Riecke, B. E.; DiPaola, S.; Holly Rushmeier and Oliver Deussen
The effect of aperture shape on an image, known in photography as ‘bokeh’, is an important characteristic of depth of field in real‐world cameras. However, most real‐time depth of field techniques produce Gaussian bokeh rather than the circular or polygonal bokeh that is almost universal in real‐world cameras. ‘Scattering’ (i.e. point‐splatting) techniques provide a flexible way to model any aperture shape, but tend to have prohibitively slow performance and require geometry shaders or significant engine changes to implement. This paper shows that simple post‐process ‘gathering’ depth of field shaders can be easily extended to simulate certain bokeh effects. Specifically, we show that it is possible to efficiently model the bokeh effects of square, hexagonal and octagonal apertures using a novel separable filtering approach. Performance data from a video game engine test demonstrates that our shaders attain much better frame rates than a naive non‐separable approach.
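The separable filtering idea at the heart of this paper can be illustrated with a toy example: convolving an image with one 1-D box filter and then with a second 1-D box filter along a different direction produces, in two cheap passes, a parallelogram-shaped point spread function instead of a Gaussian one. The NumPy sketch below demonstrates only that general principle; the pass directions, per-pixel circle-of-confusion sizes and the composition of passes used for hexagonal and octagonal bokeh in the actual shader are not reproduced here.

```python
import numpy as np

def box_blur_along(img, direction, radius):
    """1-D box filter along an integer pixel direction (dy, dx)."""
    dy, dx = direction
    acc = np.zeros_like(img, dtype=np.float64)
    for i in range(-radius, radius + 1):
        acc += np.roll(img, shift=(i * dy, i * dx), axis=(0, 1))
    return acc / (2 * radius + 1)

# A single bright pixel reveals the resulting bokeh shape.
img = np.zeros((64, 64))
img[32, 32] = 1.0
blurred = box_blur_along(img, (0, 1), 8)      # horizontal pass
blurred = box_blur_along(blurred, (1, 1), 8)  # skewed pass -> parallelogram bokeh
```

A horizontal plus a vertical pass gives square bokeh in the same way; how the paper composes passes to obtain hexagonal and octagonal shapes is described in the article itself.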
Item: 33rd EUROGRAPHICS General Assembly (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Holly Rushmeier and Oliver Deussen

Item: Cultural Heritage Predictive Rendering (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Happa, Jassim; Bashford-Rogers, Tom; Wilkie, Alexander; Artusi, Alessandro; Debattista, Kurt; Chalmers, Alan; Holly Rushmeier and Oliver Deussen
High‐fidelity rendering can be used to investigate Cultural Heritage (CH) sites in a scientifically rigorous manner. However, a high degree of realism in the reconstruction of a CH site can be misleading insofar as it can be seen to imply a high degree of certainty about the displayed scene, which is frequently not the case, especially when investigating the past. So far, little effort has gone into adapting and formulating a Predictive Rendering pipeline for CH research applications. In this paper, we first discuss the goals and the workflow of CH reconstructions in general, as well as those of traditional Predictive Rendering. Based on this, we then propose a research framework for CH research, which we refer to as ‘Cultural Heritage Predictive Rendering’ (CHPR). This is an extension to Predictive Rendering that introduces a temporal component and addresses uncertainty that is important for the scene’s historical interpretation. To demonstrate these concepts, two example case studies are detailed.

Item: REPORTS OF THE STATUTORY AUDITORS TO THE GENERAL MEETING OF THE MEMBERS OF EUROGRAPHICS ASSOCIATION, GENEVA (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Rushmeier, Holly; Deussen, Oliver; Holly Rushmeier and Oliver Deussen

Item: A Significance Cache for Accelerating Global Illumination (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Bashford-Rogers, Thomas; Debattista, Kurt; Chalmers, Alan; Holly Rushmeier and Oliver Deussen
Rendering using physically based methods requires substantial computational resources. Most physically based methods use straightforward techniques that may excessively compute certain types of light transport while ignoring more important ones. Importance sampling is an effective and commonly used technique to reduce variance in such methods. Most current Monte Carlo approaches for physically based rendering sample the BRDF and cosine term, but are unable to sample the indirect illumination, as this is the term that is being computed. Knowledge of the incoming illumination can be especially useful in the case of hard‐to‐find light paths, such as caustics, or scenes that rely primarily on indirect illumination. To facilitate the determination of such paths, we propose a caching scheme which stores important directions and is analytically sampled to calculate important paths. Results show an improvement over BRDF sampling and similar illumination importance sampling.

Item: Selecting Coherent and Relevant Plots in Large Scatterplot Matrices (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Lehmann, Dirk J.; Albuquerque, Georgia; Eisemann, Martin; Magnor, Marcus; Theisel, Holger; Holly Rushmeier and Oliver Deussen
The scatterplot matrix (SPLOM) is a well‐established technique to visually explore high‐dimensional data sets. It is characterized by the number of scatterplots (plots) of which it consists. Unfortunately, this number grows quadratically with the number of the data set’s dimensions, so an SPLOM scales very poorly and its usefulness is restricted to a small number of dimensions. Several approaches already exist to explore such ‘small’ SPLOMs, but they address the scalability problem only indirectly, without solving it. We therefore introduce a new greedy approach to manage ‘large’ SPLOMs with more than 100 dimensions. We establish a combined visualization and interaction scheme that produces intuitively interpretable SPLOMs by combining known quality measures, a pre‐process reordering and a perception‐based abstraction. With this scheme, the user can interactively find large amounts of relevant plots in large SPLOMs.
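To make the scalability problem concrete: an n-dimensional data set gives rise to n(n-1)/2 distinct pairwise scatterplots (ignoring the diagonal and mirrored plots), so the "more than 100 dimensions" regime addressed by the paper already implies thousands of plots. A quick, purely illustrative check:

```python
def distinct_plots(n_dims: int) -> int:
    """Distinct off-diagonal plots in an n x n scatterplot matrix."""
    return n_dims * (n_dims - 1) // 2

print(distinct_plots(10))   # 45   -- still browsable by hand
print(distinct_plots(100))  # 4950 -- far too many to inspect manually
```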
Item: Enhanced Texture‐Based Terrain Synthesis on Graphics Hardware (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Tasse, F. P.; Gain, J.; Marais, P.; Holly Rushmeier and Oliver Deussen
Curvilinear features extracted from a 2D user‐sketched feature map have been used successfully to constrain a patch‐based texture synthesis of real landscapes. This map‐based user interface does not give fine control over the height profile of the generated terrain. We propose a new texture‐based terrain synthesis framework controllable by a terrain sketching interface. We enhance the realism of the generated landscapes by using a novel patch merging method that reduces boundary artefacts caused by overlapping terrain patches. A more constrained synthesis process is used to produce landscapes that better match user requirements. The high computational cost of texture synthesis is reduced with a parallel implementation on graphics hardware. Our GPU‐accelerated solution provides a significant speedup depending on the size of the example terrain. We show experimentally that our framework is more successful in generating realistic landscapes than current example‐based terrain synthesis methods. We conclude that texture‐based terrain synthesis combined with sketching provides an excellent solution to the user control and realism challenges of virtual landscape generation.

Item: In at the Deep End: An Activity-Led Introduction to First Year Creative Computing (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Anderson, E. F.; Peters, C. E.; Halloran, J.; Every, P.; Shuttleworth, J.; Liarokapis, F.; Lane, R.; Richards, M.; Holly Rushmeier and Oliver Deussen
Misconceptions about the nature of the computing disciplines pose a serious problem to university faculties that offer computing degrees, as students enrolling on their programmes may come to realise that their expectations are not met by reality. This frequently results in the students’ early disengagement from the subject of their degrees, which in turn can lead to excessive ‘wastage’, that is, reduced retention. In this paper, we report on our academic group’s attempts within creative computing degrees at a UK university to counter these problems through the introduction of a six-week project that newly enrolled students embark on at the very beginning of their studies. This group project, involving the creation of a 3D etch‐a‐sketch‐like computer graphics application with a hardware interface, provides a breadth‐first, activity‐led introduction to the students’ chosen academic discipline, aiming to increase student engagement while providing a stimulating learning experience with the overall goal of increasing retention. We present the methods and results of two iterations of these projects in the 2009/2010 and 2010/2011 academic years, and conclude that the approach worked well for these cohorts, with students expressing increased interest in their chosen discipline, in addition to noticeable improvements in retention following the first year of the students’ studies.

Item: Editorial (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Holly Rushmeier and Oliver Deussen

Item: Adaptive Compression of Texture Pyramids (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Andujar, C.; Holly Rushmeier and Oliver Deussen
High-quality texture minification techniques, including trilinear and anisotropic filtering, require texture data to be arranged into a collection of pre-filtered texture maps called mipmaps. In this paper, we present a compression scheme for mipmapped textures which achieves much higher quality than current native schemes by exploiting image coherence across mipmap levels. The basic idea is to use a high-quality native compressed format for the upper levels of the mipmap pyramid (to retain efficient minification filtering) together with a novel compact representation of the detail provided by the highest-resolution mipmap. Key elements of our approach include delta-encoding of the luminance signal, efficient encoding of coherent regions through texel runs following a Hilbert scan, a scheme for run encoding supporting fast random access, and a predictive approach for encoding indices of variable-length blocks. We show that our scheme clearly outperforms native 6:1 compressed texture formats in terms of image quality while still providing real-time rendering of trilinearly filtered textures.
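The cross-level coherence exploited by this scheme is easy to visualize: once the next-coarser mipmap level is upsampled, the finest level adds only a small residual that is mostly near zero and full of coherent regions. The sketch below computes just that luminance residual under simple assumptions (pixel-replication upsampling, Rec. 601 luma); the paper's actual encoder, with its Hilbert-scan texel runs, random-access run encoding and predictive index coding, is not reproduced.

```python
import numpy as np

def luminance(rgb):
    """Rec. 601 luma of an (H, W, 3) float image."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def finest_level_delta(level0_rgb, level1_rgb):
    """Luminance residual between mip level 0 and upsampled mip level 1.

    level1_rgb has half the resolution of level0_rgb in each dimension;
    encoding this delta instead of level 0 itself is the kind of
    cross-level redundancy a mipmap-aware codec can exploit.
    """
    upsampled = np.repeat(np.repeat(level1_rgb, 2, axis=0), 2, axis=1)
    return luminance(level0_rgb) - luminance(upsampled)
```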
Item: Perceptually Optimized Coded Apertures for Defocus Deblurring (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Masia, Belen; Presa, Lara; Corrales, Adrian; Gutierrez, Diego; Holly Rushmeier and Oliver Deussen
The field of computational photography, and in particular the design and implementation of coded apertures, has yielded impressive results in recent years. In this paper we introduce perceptually optimized coded apertures for defocus deblurring. We obtain near‐optimal apertures by means of optimization, with a novel evaluation function that includes two existing perceptual image quality metrics. These metrics favour results where errors in the final deblurred images will not be perceived by a human observer. Our work improves the results obtained with a similar approach that only takes into account the L2 metric in the evaluation function.

Item: New EUROGRAPHICS Fellows (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Rushmeier, Holly; Deussen, Oliver; Holly Rushmeier and Oliver Deussen

Item: Multi‐Class Anisotropic Electrostatic Halftoning (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Schmaltz, C.; Gwosdek, P.; Weickert, J.; Holly Rushmeier and Oliver Deussen
Electrostatic halftoning, a sampling algorithm based on electrostatic principles, is among the leading methods for stippling, dithering and sampling. However, this approach is only applicable to a single class of dots with a uniform size and colour. In our work, we complement these ideas with advanced features for real‐world applications. We propose a versatile framework for colour halftoning, hatching and multi‐class importance sampling with individual weights. Our novel approach is the first method that globally optimizes the distribution of different objects in varying sizes relative to multiple given density functions. The quality, versatility and adaptability of our approach are demonstrated in various experiments.
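For readers unfamiliar with the underlying model: in single-class electrostatic halftoning, stipple dots act like charged particles that repel each other and are attracted towards dark image regions, and the equilibrium of these forces yields the dot distribution. The naive O(n^2) sketch below illustrates only that basic principle, with an assumed attraction callback; the multi-class, anisotropic and individually weighted extensions that are this paper's contribution (and the fast summation techniques used in practice) are not shown.

```python
import numpy as np

def repulsion_forces(points):
    """Pairwise 1/r repulsion between all stipple points, O(n^2) for clarity."""
    diff = points[:, None, :] - points[None, :, :]   # (n, n, 2) displacement vectors
    dist2 = np.sum(diff ** 2, axis=-1) + 1e-12
    np.fill_diagonal(dist2, np.inf)                  # no self-force
    return np.sum(diff / dist2[..., None], axis=1)

def halftoning_step(points, attraction, step=1e-3):
    """One explicit step: particle repulsion plus an image-derived attraction.

    `attraction(points)` is an assumed callback returning, for each point, a
    force pulling it towards dark regions of the input image (for example the
    negative gradient of a smoothed darkness field).
    """
    return points + step * (repulsion_forces(points) + attraction(points))
```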
Item: Parallel Surface Reconstruction for Particle-Based Fluids (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Akinci, G.; Ihmsen, M.; Akinci, N.; Teschner, M.; Holly Rushmeier and Oliver Deussen
This paper presents a novel method that improves the efficiency of high‐quality surface reconstructions for particle-based fluids using Marching Cubes. By constructing the scalar field only in a narrow band around the surface, the computational complexity and the memory consumption scale with the fluid surface instead of the volume. Furthermore, a parallel implementation of the method is proposed. The presented method works with various scalar field construction approaches. Experiments show that our method reconstructs high‐quality surface meshes efficiently even on single‐core CPUs. It scales nearly linearly on multi‐core CPUs and runs up to fifty times faster on GPUs compared to the original scalar field construction approaches.
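The narrow-band restriction can be sketched independently of the particular scalar field: only grid nodes inside the particles' kernel support are ever touched, so the work and memory follow the fluid surface rather than its volume, and a subsequent Marching Cubes pass visits only those cells. The Python sketch below uses a simple unnormalized kernel sum purely for illustration; the paper's scalar field definitions, surface particle detection and parallel decomposition are not reproduced.

```python
import numpy as np
from collections import defaultdict

def narrow_band_scalar_field(particles, h, cell):
    """Evaluate a kernel-sum scalar field only at grid nodes near particles.

    particles: (n, 3) positions; h: kernel support radius; cell: grid spacing.
    Returns {grid index: field value} for nodes inside the narrow band, which
    is the only data a subsequent Marching Cubes pass needs to visit.
    """
    field = defaultdict(float)
    reach = int(np.ceil(h / cell))
    for p in particles:
        base = np.floor(p / cell).astype(int)
        for di in range(-reach, reach + 1):
            for dj in range(-reach, reach + 1):
                for dk in range(-reach, reach + 1):
                    idx = (base[0] + di, base[1] + dj, base[2] + dk)
                    x = np.array(idx) * cell
                    r = np.linalg.norm(x - p)
                    if r < h:                      # inside kernel support
                        q = 1.0 - (r / h) ** 2
                        field[idx] += q * q * q    # smooth, compactly supported kernel
    return field
```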
Item: Local Poisson SPH For Viscous Incompressible Fluids (The Eurographics Association and Blackwell Publishing Ltd., 2012)
He, Xiaowei; Liu, Ning; Li, Sheng; Wang, Hongan; Wang, Guoping; Holly Rushmeier and Oliver Deussen
Enforcing fluid incompressibility is one of the time‐consuming aspects of SPH. In this paper, we present a local Poisson SPH (LPSPH) method to solve incompressibility for particle-based fluid simulation. Considering the pressure Poisson equation, we first convert it into an integral form, and then apply a discretization to convert the continuous integral equation into a discretized summation over all the particles in the local pressure integration domain determined by the local geometry. To control the approximation error, we further integrate our local pressure solver into the predictive‐corrective framework to avoid the computational cost of solving a pressure Poisson equation globally. Our method effectively eliminates the large density deviations mainly caused by the solid boundary treatment and free-surface topological changes, and shows a higher convergence rate than predictive‐corrective incompressible SPH (PCISPH).
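For context, the pressure Poisson equation the abstract refers to is, in its standard projection-method form (a textbook relation, not a formula specific to this paper),

\[
  \nabla^{2} p \;=\; \frac{\rho}{\Delta t}\,\nabla\cdot\mathbf{u}^{*},
\]

where \(\mathbf{u}^{*}\) is the intermediate velocity after applying the non-pressure forces. LPSPH recasts this equation in integral form and evaluates it as a summation over the particles of a local integration domain instead of solving a global linear system.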
Item: Improving Data Locality for Efficient In‐Core Path Tracing (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Bikker, J.; Holly Rushmeier and Oliver Deussen
In this paper, we investigate the efficiency of ray queries on the CPU in the context of path tracing, where ray distributions are mostly random. We show that existing schemes that exploit data locality to improve ray tracing efficiency fail to do so beyond the first diffuse bounce, and analyze the cause for this. We then present an alternative scheme, inspired by the work of Pharr et al., in which we improve data locality by using a data‐centric, breadth‐first approach. We show that our scheme improves on state‐of‐the‐art performance for ray distributions in a path tracer.
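The data-centric, breadth-first idea can be caricatured as follows: rather than tracing each path to completion and touching scattered scene data per ray, all rays of the current bounce are collected and processed in batches grouped by the portion of the acceleration structure they are about to traverse, so each portion is streamed through the cache once per wavefront. The scheduling sketch below assumes two hypothetical helpers, treelet_of and intersect_treelet, and only illustrates that idea; it is not the system described in the paper.

```python
from collections import defaultdict

def trace_wavefront(rays, treelet_of, intersect_treelet, max_bounces=4):
    """Breadth-first ray scheduling sketch (assumed helper callbacks).

    treelet_of(ray)            -> id of the BVH subtree the ray enters next
    intersect_treelet(tid, rs) -> continuation rays (empty when paths end)
    Grouping rays by the data they need means each subtree is loaded into
    the cache once per wavefront instead of once per ray.
    """
    wavefront = list(rays)
    for _ in range(max_bounces):
        if not wavefront:
            break
        batches = defaultdict(list)
        for ray in wavefront:                  # bin rays by required scene data
            batches[treelet_of(ray)].append(ray)
        wavefront = []
        for tid, batch in batches.items():     # one streaming pass per treelet
            wavefront.extend(intersect_treelet(tid, batch))
```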
Item: Real‐Time Fluid Effects on Surfaces using the Closest Point Method (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Auer, S.; Macdonald, C. B.; Treib, M.; Schneider, J.; Westermann, R.; Holly Rushmeier and Oliver Deussen
The Closest Point Method (CPM) is a method for numerically solving partial differential equations (PDEs) on arbitrary surfaces, independent of the existence of a surface parametrization. The CPM uses a closest point representation of the surface to solve the unmodified Cartesian version of a surface PDE in a 3D volume embedding, using simple and well‐understood techniques. In this paper, we present the numerical solution of the wave equation and the incompressible Navier‐Stokes equations on surfaces via the CPM, and we demonstrate surface appearance and shape variations in real time using this method. To fully exploit the potential of the CPM, we present a novel GPU realization of the entire CPM pipeline. We propose a surface‐embedding adaptive 3D spatial grid for efficient representation of the surface, and present a high‐performance approach using CUDA for converting surfaces given by triangulations into this representation. For real‐time performance, CUDA is also used for the numerical procedures of the CPM. For rendering the surface (and the PDE solution) directly from the closest point representation, without the need to reconstruct a triangulated surface, we present a GPU ray‐casting method that works on the adaptive 3D grid.
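The core CPM iteration can be shown with a toy 2D example: values live on a Cartesian grid surrounding the curve, and every step first replaces each grid value with the value at the closest point on the curve (the closest point extension) and then applies an ordinary Cartesian finite-difference step. The sketch below does this for heat flow on the unit circle and uses nearest-node extension purely for brevity; the paper's interpolation scheme, adaptive surface-embedding grid, wave and Navier-Stokes solvers and CUDA implementation are not reproduced.

```python
import numpy as np

# Grid covering the unit circle with a small margin.
n, L = 101, 1.4
xs = np.linspace(-L, L, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
h = xs[1] - xs[0]

# Closest point on the unit circle for every grid node.
R = np.sqrt(X**2 + Y**2) + 1e-12
CPx, CPy = X / R, Y / R

# Index of the grid node nearest to each closest point
# (nearest-neighbour extension; real CPM uses proper interpolation).
I = np.clip(np.rint((CPx + L) / h).astype(int), 0, n - 1)
J = np.clip(np.rint((CPy + L) / h).astype(int), 0, n - 1)

u = np.sin(np.arctan2(Y, X))   # initial data: sin(theta) on the circle
dt = 0.2 * h**2                # stable explicit diffusion step
for _ in range(200):
    u = u[I, J]                # 1) closest point extension
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / h**2
    u += dt * lap              # 2) unmodified Cartesian heat step
# u now approximates in-surface diffusion of sin(theta) along the circle.
```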
Item: Feature-Preserving Displacement Mapping With Graphics Processing Unit (GPU) Tessellation (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Jang, Hanyoung; Han, JungHyun; Holly Rushmeier and Oliver Deussen
Displacement mapping reconstructs a high‐frequency surface by adding geometric details encoded in the displacement map to the coarse base surface. In the context of hardware tessellation supported by GPUs, this paper aims at feature‐preserving surface reconstruction, and proposes the generation of a displacement map that displaces more vertices towards the higher‐frequency feature parts of the target mesh. In order to generate the feature‐preserving displacement map, surface features of the target mesh are estimated, and then the target mesh is parametrized and sampled using the features. At run time, the base surface is semi‐uniformly tessellated by hardware, and then the vertices of the tessellated mesh are displaced non‐uniformly along the 3‐D vectors stored in the displacement map. The experimental results show that the surfaces reconstructed by the proposed method are of a higher quality than those reconstructed by other methods.
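The run-time half of this pipeline is straightforward to mimic on the CPU: hardware tessellation produces a dense, semi-uniform set of parameter samples over each base patch, and every resulting vertex is then offset by the 3-D vector fetched from the displacement map at its parametric coordinates. The NumPy sketch below does exactly that for a flat base patch with nearest-texel fetches, both simplifying assumptions; the feature-aware generation of the displacement map, which is the paper's actual contribution, is not shown.

```python
import numpy as np

def displace_patch(base_point, tangent_u, tangent_v, disp_map, tess=16):
    """Displace a tessellated flat base patch by vectors from a displacement map.

    base_point, tangent_u, tangent_v: (3,) arrays spanning the patch.
    disp_map: (H, W, 3) array of 3-D displacement vectors.
    Returns displaced vertex positions of shape (tess+1, tess+1, 3).
    """
    H, W, _ = disp_map.shape
    u = np.linspace(0.0, 1.0, tess + 1)
    v = np.linspace(0.0, 1.0, tess + 1)
    U, V = np.meshgrid(u, v, indexing="ij")

    # Semi-uniform tessellation of the base surface.
    base = (base_point
            + U[..., None] * tangent_u
            + V[..., None] * tangent_v)

    # Nearest-texel fetch of the stored 3-D displacement vectors.
    ti = np.minimum((U * (H - 1)).round().astype(int), H - 1)
    tj = np.minimum((V * (W - 1)).round().astype(int), W - 1)
    return base + disp_map[ti, tj]
```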