EGWR: Eurographics Workshop on Rendering
Browsing EGWR: Eurographics Workshop on Rendering by Title
Now showing 1 - 20 of 586
Item: Accelerating Hair Rendering by Learning High-Order Scattered Radiance (The Eurographics Association and John Wiley & Sons Ltd., 2023)
KT, Aakash; Jarabo, Adrian; Aliaga, Carlos; Chiang, Matt Jen-Yuan; Maury, Olivier; Hery, Christophe; Narayanan, P. J.; Nam, Giljoo; Ritschel, Tobias; Weidlich, Andrea
Efficiently and accurately rendering hair while accounting for multiple scattering is a challenging open problem. Path tracing in hair takes a long time to converge, while other techniques are either too approximate (yet still computationally expensive) or make assumptions about the scene. We present a technique to infer the higher-order scattering in hair in constant time within the path tracing framework, while achieving better computational efficiency. Our method makes no assumptions about the scene and provides control over the renderer's bias and speedup. We achieve this by training a small multilayer perceptron (MLP) to learn the higher-order radiance online, while rendering progresses. We describe how to robustly train this network and thoroughly analyze the resulting renderer's characteristics. We evaluate our method on various hairstyles and lighting conditions, compare it against a recent learning-based method and a traditional real-time hair rendering method, and demonstrate better quantitative and qualitative results. Our method achieves a significant speedup over path tracing, reducing run time by 40%-70% while introducing only a small amount of bias.

Item: Accelerating Path Tracing by Re-Using Paths (The Eurographics Association, 2002)
Bekaert, Philippe; Sbert, Mateu; Halton, John; P. Debevec and S. Gibson
This paper describes a new acceleration technique for rendering algorithms, such as path tracing, that use so-called gathering random walks. In path tracing, each traced path is usually used to compute a contribution to only a single point on the virtual screen.
We propose to combine paths traced through nearby screen points in such a way that each path contributes to multiple screen points in a provably good way. Our approach is unbiased and is not restricted to diffuse light scattering. It complements previous image noise reduction techniques for Monte Carlo ray tracing. We observe speed-ups of one order of magnitude in the computation of indirect illumination.

Item: Accelerating Ray Tracing using Constrained Tetrahedralizations (The Eurographics Association and Blackwell Publishing Ltd, 2008)
Lagae, Ares; Dutre, Philip
In this paper we introduce the constrained tetrahedralization as a new acceleration structure for ray tracing. A constrained tetrahedralization of a scene is a tetrahedralization that respects the faces of the scene geometry. The closest intersection of a ray with a scene is found by traversing this tetrahedralization along the ray, one tetrahedron at a time. We show that constrained tetrahedralizations are a viable alternative to current acceleration structures, and that they have a number of unique properties that set them apart from other acceleration structures: constrained tetrahedralizations are not hierarchical yet adaptive; the complexity of traversing them is a function of local rather than global geometric complexity; they support deforming geometry without any effort; and they have the potential to unify several data structures currently used in global illumination.

Item: Accurate Fitting of Measured Reflectances Using a Shifted Gamma Micro-facet Distribution (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Bagher, Mohammad Mahdi; Soler, Cyril; Holzschuch, Nicolas; Fredo Durand and Diego Gutierrez
Material models are essential to the production of photo-realistic images. Measured BRDFs provide an accurate representation with complex visual appearance, but at a larger storage cost.
Analytical BRDFs such as Cook-Torrance provide a compact representation but fail to reproduce the effects we observe in measured appearance. Accurately fitting an analytical BRDF to measured data remains a challenging problem. In this paper we introduce the SGD (shifted gamma distribution) micro-facet distribution for the Cook-Torrance BRDF. This distribution accurately models the behavior of most materials; as a consequence, we can accurately represent all measured BRDFs using a single lobe. Our fitting procedure is stable and robust, and does not require manual tweaking of the parameters.

Item: Acquisition and Rendering of Transparent and Refractive Objects (The Eurographics Association, 2002)
Matusik, Wojciech; Pfister, Hanspeter; Ziegler, Remo; Ngan, Addy; McMillan, Leonard; P. Debevec and S. Gibson
This paper introduces a new image-based approach to capturing and modeling highly specular, transparent, or translucent objects. We have built a system for automatically acquiring high-quality graphical models of objects that are extremely difficult to scan with traditional 3D scanners. The system consists of turntables, a set of cameras and lights, and monitors that project colored backdrops. We use multi-background matting techniques to acquire alpha and environment mattes of the object from multiple viewpoints. Using the alpha mattes, we reconstruct an approximate 3D shape of the object. We use the environment mattes to compute a high-resolution surface reflectance field, and also acquire a low-resolution surface reflectance field using the overhead array of lights. Both surface reflectance fields are used to relight the objects and to place them into arbitrary environments.
Our system is the first to acquire and render transparent and translucent 3D objects, such as a glass of beer, from arbitrary viewpoints under novel illumination.

Item: Acquisition and Validation of Spectral Ground Truth Data for Predictive Rendering of Rough Surfaces (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Clausen, Olaf; Marroquim, Ricardo; Fuhrmann, Arnulph; Jakob, Wenzel and Hachisuka, Toshiya
Physically based rendering uses principles of physics to model the interaction of light with matter. Even though it is possible to achieve photorealistic renderings, it often fails to be predictive. There are two major issues: first, there is no analytic material model that considers all appearance-critical characteristics; second, light is in many cases described by only three RGB samples. As a result, different material types require different models, and wavelength-dependent phenomena are only approximated. In order to analyze the influence of both problems on the appearance of real-world materials, an accurate comparison between rendering and reality is necessary. Therefore, in this work, we acquired a set of precise, spectrally resolved ground-truth data. It consists of a precise description of a newly developed reference scene, including isotropic BRDFs of 24 color patches, as well as reference measurements of all patches under 13 different angles inside the reference scene. Our reference data covers rough materials with many different spectral distributions and various illumination situations, from direct light to situations dominated by indirect light.

Item: An Adaptive BRDF Fitting Metric (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Bieron, James; Peers, Pieter; Dachsbacher, Carsten and Pharr, Matt
We propose a novel image-driven fitting strategy for isotropic BRDFs.
Whereas existing BRDF fitting methods minimize a cost function directly on the error between the fitted analytical BRDF and the measured isotropic BRDF samples, we also take into account the resulting material appearance in visualizations of the BRDF. This change of fitting paradigm improves the fidelity of appearance reproduction, especially for analytical BRDF models that lack the expressiveness to reproduce the measured surface reflectance. We formulate BRDF fitting as a two-stage process: first, we generate a series of candidate BRDF fits based only on the error with respect to the measured BRDF samples; next, from these candidates, we select the BRDF fit that minimizes the visual error. We demonstrate qualitatively and quantitatively improved fits for the Cook-Torrance and GGX microfacet BRDF models. Furthermore, we present an analysis of the BRDF fitting results, and show that the image-driven isotropic BRDF fits generalize well to other lighting conditions, and that, depending on the measured material, a different weighting of errors with respect to the measured BRDF is necessary.

Item: Adaptive BRDF-Oriented Multiple Importance Sampling of Many Lights (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Liu, Yifan; Xu, Kun; Yan, Ling-Qi; Boubekeur, Tamy and Sen, Pradeep
Many-light rendering is becoming more common and important as rendering moves to the next level of complexity. However, state-of-the-art algorithms are still far from efficient at computing the illumination under many lights, because they consider light sampling and BRDF sampling separately. To address this inefficiency, we present a novel light sampling method, BRDF-oriented light sampling, which selects lights based on importance values estimated from the BRDF's contributions. Our BRDF-oriented light sampling method works naturally with multiple importance sampling (MIS), and allows us to dynamically determine the number of samples allocated to the different sampling techniques.
With our method, we achieve significantly faster convergence to the ground-truth result, both perceptually and numerically, compared to previous many-light rendering algorithms.

Item: Adaptive Frameless Rendering (The Eurographics Association, 2005)
Dayal, Abhinav; Woolley, Cliff; Watson, Benjamin; Luebke, David; Kavita Bala and Philip Dutre
We propose an adaptive form of frameless rendering with the potential to dramatically increase rendering speed over conventional interactive rendering approaches. Without the rigid sampling patterns of framed renderers, sampling and reconstruction can adapt with very fine granularity to spatio-temporal color change. A sampler uses closed-loop feedback to guide sampling toward edges or motion in the image. Temporally deep buffers store all the samples created over a short time interval for use in reconstruction and as sampler feedback. GPU-based reconstruction responds both to sampling density and to space-time color gradients. Where the displayed scene is static, spatial color change dominates and older samples are given significant weight in reconstruction, resulting in sharper and eventually antialiased images. Where the scene is dynamic, more recent samples are emphasized, resulting in less sharp but more up-to-date images. We also use sample reprojection to improve reconstruction and to guide sampling toward occlusion edges, undersampled regions, and specular highlights.
In simulation, our frameless renderer requires an order of magnitude fewer samples than traditional rendering of similar visual quality (as measured by RMS error), while introducing overhead amounting to 15% of computation time.

Item: Adaptive Image-Space Sampling for Gaze-Contingent Real-time Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Stengel, Michael; Grogorick, Steve; Eisemann, Martin; Magnor, Marcus; Elmar Eisemann and Eugene Fiume
With ever-increasing resolution for wide field-of-view displays, such as head-mounted displays or 8K projectors, shading has become the major computational cost in rasterization. To reduce computational effort, we propose an algorithm that shades only the visible features of the image while cost-effectively interpolating the remaining features, without affecting perceived quality. In contrast to previous approaches, we do not only simulate acuity falloff but also introduce a sampling scheme that incorporates multiple aspects of the human visual system: acuity, eye motion, contrast (stemming from geometry, material, or lighting properties), and brightness adaptation. Our sampling scheme is incorporated into a deferred shading pipeline to shade the image's perceptually relevant fragments, while a pull-push algorithm interpolates the radiance for the rest of the image. Our approach does not impose any restrictions on the performed shading. We conduct a number of psycho-visual experiments to validate the scene- and task-independence of our approach. The number of fragments that need to be shaded is reduced by 50% to 80%.
Our algorithm scales favorably with increasing resolution and field-of-view, rendering it well-suited for head-mounted displays and wide-field-of-view projection.

Item: Adaptive Matrix Completion for Fast Visibility Computations with Many Lights Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Wang, Sunrise; Holzschuch, Nicolas; Dachsbacher, Carsten and Pharr, Matt
Several fast global illumination algorithms rely on the Virtual Point Lights framework. This framework separates illumination into two steps: first, propagate radiance in the scene and store it in virtual lights; then, gather illumination from these virtual lights. To accelerate the second step, virtual lights and receiving points are grouped hierarchically, for example using Multi-Dimensional Lightcuts. Computing visibility between clusters of virtual lights and receiving points is a bottleneck. Separately, matrix completion algorithms fully reconstruct a low-rank matrix from an incomplete set of sampled elements. In this paper, we use adaptive matrix completion to approximate visibility information after an initial clustering step. We reconstruct visibility information using as few as 10% to 20% of the samples for most scenes, and combine it with shading information computed separately, in parallel on the GPU. Overall, our method computes global illumination 3 or more times faster than previous state-of-the-art methods.

Item: Adaptive Numerical Cumulative Distribution Functions for Efficient Importance Sampling (The Eurographics Association, 2005)
Lawrence, Jason; Rusinkiewicz, Szymon; Ramamoorthi, Ravi; Kavita Bala and Philip Dutre
As image-based surface reflectance and illumination gain wider use in physically-based rendering systems, it is becoming more critical to provide representations that allow sampling light paths according to the distribution of energy in these high-dimensional measured functions.
In this paper, we apply algorithms traditionally used for curve approximation to reduce the size of a multidimensional tabulated Cumulative Distribution Function (CDF) by one to three orders of magnitude without compromising its fidelity. These adaptive representations enable new algorithms for sampling environment maps according to the local orientation of the surface, and for multiple importance sampling of image-based lighting and measured BRDFs.

Item: Adaptive Temporal Sampling for Volumetric Path Tracing of Medical Data (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Martschinke, Jana; Hartnagel, Stefan; Keinert, Benjamin; Engel, Klaus; Stamminger, Marc; Boubekeur, Tamy and Sen, Pradeep
Monte-Carlo path tracing techniques can generate stunning visualizations of medical volumetric data. In a clinical context, such renderings have turned out to be valuable for communication, education, and diagnosis. Because a large number of computationally expensive lighting samples is required to converge to a smooth result, progressive rendering is the only option for interactive settings: low-sampled, noisy images are shown while the user explores the data, and as soon as the camera is at rest the view is progressively refined. During interaction, the visual quality is low, which strongly impedes the user's experience. Even worse, when a data set is explored in virtual reality, the camera is never at rest, leading to constantly low image quality and strong flickering. In this work we present an approach that brings volumetric Monte-Carlo path tracing to the interactive domain by reusing samples over time. To this end, we transfer the idea of temporal antialiasing from surface rendering to volume rendering.
We show how to reproject volumetric ray samples even though they cannot be pinned to a particular 3D position, present an improved weighting scheme that makes longer history trails possible, and define an error accumulation method that down-weights less appropriate older samples. Furthermore, we exploit reprojection information to adaptively determine the number of newly generated path tracing samples for each individual pixel. Our approach is designed for static medical data with both volumetric and surface-like structures. It achieves good-quality volumetric Monte-Carlo renderings with little noise, and is also usable in a VR context.

Item: Adaptive Visibility-Driven View Cell Construction (The Eurographics Association, 2006)
Mattausch, Oliver; Bittner, Jiří; Wimmer, Michael; Tomas Akenine-Moeller and Wolfgang Heidrich
We present a new method for the automatic partitioning of view space into a multi-level view cell hierarchy. We use a cost-based model in order to minimize the average rendering time. Unlike previous methods, our model takes into account the actual visibility in the scene, and the partition is not restricted to planes given by the scene geometry. We show that the resulting view cell hierarchy works for different types of scenes and gives lower average rendering times than previously used methods.

Item: Adaptive Volumetric Shadow Maps (The Eurographics Association and Blackwell Publishing Ltd, 2010)
Salvi, Marco; Vidimce, Kiril; Lauritzen, Andrew; Lefohn, Aaron
We introduce adaptive volumetric shadow maps (AVSM), a real-time shadow algorithm that supports high-quality shadowing from dynamic volumetric media such as hair and smoke. The key contribution of AVSM is the introduction of a streaming simplification algorithm that generates an accurate volumetric light attenuation function using a small fixed memory footprint.
This compression strategy leads to high performance because the visibility data can remain in on-chip memory during simplification and can be sampled efficiently during rendering. We demonstrate that AVSM compression closely approximates the ground-truth solution and performs competitively with existing real-time rendering techniques while providing higher-quality volumetric shadows.

Item: Alias-Free Shadow Maps (The Eurographics Association, 2004)
Aila, Timo; Laine, Samuli; Alexander Keller and Henrik Wann Jensen
In this paper we abandon the regular structure of shadow maps. Instead, we transform the visible pixels P(x, y, z) from screen space to the image plane of a light source, P'(x', y', z'). The (x', y') are then used as sampling points when the geometry is rasterized into the shadow map. This eliminates the resolution issues that have plagued shadow maps for decades, e.g., jagged shadow boundaries. Incorrect self-shadowing is also greatly reduced, and semi-transparent shadow casters and receivers can be supported. A hierarchical software implementation is outlined.

Item: All-focused light field rendering (The Eurographics Association, 2004)
Kubota, Akira; Takahashi, Keita; Aizawa, Kiyoharu; Chen, Tsuhan; Alexander Keller and Henrik Wann Jensen
We present a novel reconstruction method that can synthesize an all-in-focus view from under-sampled light fields, significantly suppressing aliasing artifacts. The presented method consists of two steps: 1) rendering multiple views at a given viewpoint by performing light field rendering with different focal plane depths; 2) iteratively reconstructing the all-in-focus view by fusing the multiple views. We model the multiple views and the desired all-in-focus view as a set of linear equations in a combination of textures at the focal depths. Aliasing artifacts can be modeled as spatially (shift-)varying filters. We solve this set of linear equations using an iterative reconstruction approach.
This method effectively integrates the focused regions of each view into an all-in-focus view without any local processing steps such as depth estimation or segmentation of the focused regions.

Item: All-Frequency Precomputed Radiance Transfer for Glossy Objects (The Eurographics Association, 2004)
Liu, Xinguo; Sloan, Peter-Pike; Shum, Heung-Yeung; Snyder, John; Alexander Keller and Henrik Wann Jensen
We introduce a method based on precomputed radiance transfer (PRT) that allows interactive rendering of glossy surfaces and includes shadowing effects from dynamic, "all-frequency" lighting. Specifically, source lighting is represented by a cube map at resolution n_L …

Item: All-Frequency Relighting of Non-Diffuse Objects using Separable BRDF Approximation (The Eurographics Association, 2004)
Wang, Rui; Tran, John; Luebke, David; Alexander Keller and Henrik Wann Jensen
This paper presents a technique, based on pre-computed light transport and separable BRDF approximation, for interactive rendering of non-diffuse objects under all-frequency environment illumination. Existing techniques that use spherical harmonics to represent environment maps and transport functions are limited to low-frequency light transport effects. Non-linear wavelet lighting approximation is able to capture all-frequency illumination and shadows for geometry relighting, but interactive rendering is currently limited to diffuse objects. Our work extends the wavelet-based approach to the relighting of non-diffuse objects. We factorize the BRDF using separable decomposition and keep only a few low-order approximation terms, each consisting of a 2D light map paired with a 2D view map. We then pre-compute light transport matrices corresponding to each BRDF light map, and compress the data with a non-linear wavelet approximation. We use modern graphics hardware to accelerate precomputation.
At run-time, a sparse light vector is multiplied by the sparse transport matrix at each vertex, and the results are further combined with texture lookups of the view direction into the BRDF view maps to produce view-dependent color. Using our technique, we demonstrate rendering of objects with several non-diffuse BRDFs under all-frequency, dynamic environment lighting at interactive rates.

Item: Ambient Occlusion for Animated Characters (The Eurographics Association, 2006)
Kontkanen, Janne; Aila, Timo; Tomas Akenine-Moeller and Wolfgang Heidrich
We present a novel technique for approximating the ambient occlusion of animated objects. Our method automatically determines the correspondence between animation parameters and per-vertex ambient occlusion using a set of reference poses as its input. Then, at runtime, the ambient occlusion is approximated by taking a dot product between the current animation parameters and static per-vertex coefficients. According to our results, both the computational and storage requirements are low enough for the technique to be directly applicable to computer games running on current graphics hardware. The resulting images are also significantly more realistic than the commonly used static ambient occlusion solutions.
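The runtime step described in "Ambient Occlusion for Animated Characters" above (a dot product between the current animation parameters and static per-vertex coefficients) can be sketched as follows. This is a minimal illustration, not the authors' code: the offline correspondence step is stood in for by a hypothetical least-squares fit over the reference poses, and all names (`fit_coefficients`, `approximate_ao`) are assumptions.

```python
import numpy as np

# Offline: fit static per-vertex coefficients from reference poses
# (hypothetical least-squares stand-in for the paper's correspondence step).
# poses:  (num_poses, num_params) animation parameters, one row per pose
# ao_ref: (num_poses, num_verts) reference per-vertex ambient occlusion
def fit_coefficients(poses, ao_ref):
    coeffs, *_ = np.linalg.lstsq(poses, ao_ref, rcond=None)
    return coeffs  # (num_params, num_verts)

# Runtime: one dot product per vertex between the current animation
# parameters and the static coefficients, clamped to a valid AO range.
def approximate_ao(params, coeffs):
    return np.clip(params @ coeffs, 0.0, 1.0)  # (num_verts,)

rng = np.random.default_rng(0)
poses = rng.uniform(size=(16, 4))  # 16 reference poses, 4 parameters
ao_ref = np.clip(poses @ rng.uniform(size=(4, 100)) * 0.25, 0.0, 1.0)
coeffs = fit_coefficients(poses, ao_ref)
ao = approximate_ao(poses[0], coeffs)
print(ao.shape)  # (100,)
```

The runtime cost is a single matrix-vector product, which matches the abstract's claim of low computational and storage requirements.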
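The entry "Adaptive Numerical Cumulative Distribution Functions for Efficient Importance Sampling" above starts from tabulated CDFs of measured energy distributions. A minimal baseline sketch of such a tabulated CDF and its inversion for importance sampling is shown below; the paper's actual contribution, compressing these tables with curve approximation, is deliberately omitted, and the function names are illustrative assumptions.

```python
import numpy as np

# Build a normalized 1D tabulated CDF from discretized energy values
# (e.g. one row of an environment map or a BRDF slice).
def build_cdf(energy):
    cdf = np.cumsum(energy, dtype=np.float64)
    return cdf / cdf[-1]

# Importance-sample bin indices by inverting the tabulated CDF:
# bins with more energy are selected proportionally more often.
def sample(cdf, u):
    return np.searchsorted(cdf, u, side="right")

energy = np.array([1.0, 3.0, 0.5, 5.5])  # illustrative per-bin energy
cdf = build_cdf(energy)                  # [0.1, 0.4, 0.45, 1.0]
idx = sample(cdf, np.array([0.05, 0.39, 0.44, 0.99]))
print(idx.tolist())  # [0, 1, 2, 3]
```

A flat table like this grows with the resolution of the measured function, which is exactly the storage problem the paper's adaptive curve-approximation representation addresses.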
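The separable decomposition used in "All-Frequency Relighting of Non-Diffuse Objects using Separable BRDF Approximation" above keeps a few low-order terms, each a light factor paired with a view factor. A common way to obtain such a factorization is a truncated SVD; the sketch below applies it to a flattened (light x view) BRDF table. This is an illustrative simplification under stated assumptions (the paper uses 2D maps over direction sets, not 1D vectors), and the names are hypothetical.

```python
import numpy as np

# Separable approximation of a tabulated BRDF: rows index light directions,
# columns index view directions. Keeping num_terms SVD terms yields pairs
# of a light factor ("light map") and a view factor ("view map").
def separable_approx(brdf_matrix, num_terms):
    u, s, vt = np.linalg.svd(brdf_matrix, full_matrices=False)
    light_maps = u[:, :num_terms] * s[:num_terms]  # scaled light factors
    view_maps = vt[:num_terms, :]                  # view factors
    return light_maps, view_maps

# Reconstruction is a sum of outer products of the kept term pairs.
def reconstruct(light_maps, view_maps):
    return light_maps @ view_maps

rng = np.random.default_rng(1)
# Low-rank test matrix standing in for a glossy BRDF table.
brdf = rng.uniform(size=(64, 2)) @ rng.uniform(size=(2, 64))
light_maps, view_maps = separable_approx(brdf, num_terms=2)
err = np.abs(reconstruct(light_maps, view_maps) - brdf).max()
print(err < 1e-8)  # a rank-2 matrix is reproduced exactly by two terms
```

In the paper's pipeline, each kept light factor then gets its own precomputed transport matrix, while the view factors are looked up per pixel at run-time.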
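The temporal reuse idea in "Adaptive Temporal Sampling for Volumetric Path Tracing of Medical Data" above, blending new samples into a per-pixel history and down-weighting the history where older samples are less appropriate, can be sketched with an exponential moving average. The error heuristic and all names here are illustrative assumptions, not the authors' exact weighting scheme.

```python
import numpy as np

# Blend new noisy per-pixel samples into an accumulated history buffer.
# Where the (hypothetical) reprojection error estimate is large, the blend
# weight shifts toward the fresh sample, shortening the history trail.
def accumulate(history, new_sample, error, base_alpha=0.1):
    alpha = np.clip(base_alpha + error, 0.0, 1.0)
    return (1.0 - alpha) * history + alpha * new_sample

history = np.full(4, 0.5)               # previous accumulated radiance
new_sample = np.array([0.4, 0.6, 0.5, 0.9])
error = np.array([0.0, 0.0, 0.9, 0.9])  # per-pixel reprojection error
out = accumulate(history, new_sample, error)
print(np.round(out, 3).tolist())  # [0.49, 0.51, 0.5, 0.9]
```

The last two pixels take the new sample almost unchanged, which is the desired behavior where reprojection fails; the first two converge slowly, keeping a long, noise-suppressing history.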