Rendering 2024 - Symposium Track
Real-Time Pixel-Perfect Hard Shadows with Leak Tracing
(The Eurographics Association, 2024) Kern, René; Brüll, Felix; Grosch, Thorsten; Haines, Eric; Garces, Elena
Accurate shadows greatly enhance the realism of a rendered image. Shadow mapping is the preferred solution for shadows in real-time applications. However, shadow maps suffer from discretization errors and self-shadowing artifacts that need custom parameter tuning per scene. Filterable shadow maps such as variance or moment shadow maps solve both issues but introduce light leaking. With the advent of hardware ray tracing, it becomes more realistic to use shadow rays instead of a shadow map. However, distributing a shadow ray is often more expensive than evaluating a shadow map, especially if the ray hits alpha-tested geometry. We introduce leak tracing, where we use filterable shadow map techniques on top of default shadow maps and eliminate the light leaks and aliased shadow edges with selective ray tracing. Our algorithm does not need any scene-dependent parameters. We achieve an average speedup ranging from 1.19 to 1.79, with a top speedup of 4.17, depending on the scene, and eliminate major performance drops caused by alpha-tested geometry during ray tracing. Our solution is temporally stable and reaches similar quality to pure ray tracing.

Rendering 2024 Symposium Track: Frontmatter
(The Eurographics Association, 2024) Garces, Elena; Haines, Eric; Haines, Eric; Garces, Elena

An Implementation Algorithm of 2D Sobol Sequence Fast, Elegant, and Compact
(The Eurographics Association, 2024) Ahmed, Abdalla G. M.; Haines, Eric; Garces, Elena
We present a novel algorithm to evaluate 2D Sobol samples, bringing the time complexity for m-bit resolution to O(log(m)) instead of O(m), thus gaining a tangible performance boost. We take advantage of the geometric structure of the underlying Pascal matrix to factor it into diagonally-running matrices that are efficient to implement using bit-wise operations. We extend the method to inversion in global Sobol sampling. The algorithms form a flexible framework, able to generate several well-known sample sequences as special cases. We compare the speed performance and memory footprint of our algorithms to state-of-the-art implementations.
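For context, the sketch below shows a conventional way to evaluate one 2D Sobol point: bit reversal for the first dimension and the classic direction-vector recurrence for the second, costing O(m) bit operations per sample. It is only the standard baseline that the Pascal-matrix factorization above accelerates, not the proposed O(log(m)) algorithm; function names are illustrative.

```cpp
// Baseline 2D Sobol point evaluation (conventional O(m)-per-sample approach).
// This is NOT the paper's O(log m) factorization; it is the standard
// formulation that such work improves upon.
#include <cstdint>
#include <cstdio>
using std::uint32_t;

// First Sobol dimension: base-2 radical inverse, i.e. bit reversal of the index.
static uint32_t sobol_dim0(uint32_t i) {
    i = (i << 16) | (i >> 16);
    i = ((i & 0x00FF00FFu) << 8) | ((i & 0xFF00FF00u) >> 8);
    i = ((i & 0x0F0F0F0Fu) << 4) | ((i & 0xF0F0F0F0u) >> 4);
    i = ((i & 0x33333333u) << 2) | ((i & 0xCCCCCCCCu) >> 2);
    i = ((i & 0x55555555u) << 1) | ((i & 0xAAAAAAAAu) >> 1);
    return i;
}

// Second Sobol dimension: XOR the direction vectors selected by the set bits
// of the index; the vectors are generated on the fly by v ^= v >> 1.
static uint32_t sobol_dim1(uint32_t index) {
    uint32_t v = 1u << 31, result = 0;
    for (; index; index >>= 1) {
        if (index & 1u) result ^= v;
        v ^= v >> 1;
    }
    return result;
}

int main() {
    const double inv = 1.0 / 4294967296.0;  // 2^-32, maps integers to [0,1)
    for (uint32_t i = 0; i < 8; ++i)
        std::printf("%u: (%f, %f)\n", i, sobol_dim0(i) * inv, sobol_dim1(i) * inv);
    return 0;
}
```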
Ray Traced Stochastic Depth Map for Ambient Occlusion
(The Eurographics Association, 2024) Brüll, Felix; Kern, René; Grosch, Thorsten; Haines, Eric; Garces, Elena
Screen-space ambient occlusion is a popular technique for approximating global illumination in real-time rendering. However, it suffers from artifacts due to the lack of information in the depth buffer. A stochastic depth map [VSE21] can be used to retrieve most of the missing information, but it is not suitable for real-time rendering in large scenes. In this paper, we propose a new stochastic depth map acquisition method powered by hardware ray tracing, which shows better performance characteristics than the previous method. We present further improvements that increase the quality and performance of the stochastic depth map generation. Furthermore, the results are almost indistinguishable from a ground-truth solution that uses all depth samples.

Robust Cone Step Mapping
(The Eurographics Association, 2024) Bán, Róbert; Valasek, Gábor; Bálint, Csaba; Vad, Viktor A.; Haines, Eric; Garces, Elena
Per-pixel displacement mapping provides an alternative to high-fidelity geometry and flat textured faces with in-between performance costs. Although cone maps are known to facilitate efficient and robust rendering of height fields, we show that they cannot guarantee robustness under bilinear interpolation, and we propose corrections to this issue. First, we define an artifact-free minimum step size for the cone map tracing algorithm while remaining comparable in performance to that of Dummer. Second, we modify the cone map generation procedure so that at bilinearly interpolated values the unbounding cones remain disjoint from the heightmap, thereby preventing another source of rendering artifacts. Third, we introduce an exact method to generate relaxed cones such that any ray within them intersects the heightmap at most once, in contrast to the original algorithm, which is both computationally more expensive and generates incorrect relaxed cones. Finally, we demonstrate the applicability of these algorithmic improvements with visual and performance comparisons in our C++ and HLSL implementation.

Precomputed Dynamic Appearance Synthesis and Rendering
(The Eurographics Association, 2024) Bai, Yaoyi; Hasan, Miloš; Yan, Ling-Qi; Haines, Eric; Garces, Elena
Interpolation between objects of varying dimensionality is a common task in computer graphics; however, high-quality dynamic natural interpolation for appearance remains scarce. In this paper, we propose a blending framework for general appearances that can be integrated into renderers without modifying the rendering pipeline. For natural interpolation calculations, we use the mathematical tool of optimal transport (OT), known for its promising blending quality. Although recent advancements in OT theory have improved computational performance, integrating runtime OT calculations into the path tracing rendering pipeline compromises algorithm efficiency and increases storage requirements. To address this, we propose a novel solution that precomputes appearances into a proxy distribution and introduces a hierarchical query structure. This enables efficient online point or range data querying, allowing for the generation or retrieval of large data sets as needed. Additionally, the proxy and hierarchical query structure facilitate multi-way barycenter computation. With this efficient query structure and barycentric calculation, we demonstrate several applications of our method, including 2D and 3D interpolation, as well as isotropic BRDF interpolation.

Does Higher Refractive Index Mean Higher Gloss?
(The Eurographics Association, 2024) Gigilashvili, Davit; Diaz Estrada, David Norman; Haines, Eric; Garces, Elena
According to the Fresnel equations, the amount of specular reflection at a dielectric surface depends on two factors: the incident angle and the difference between the refractive indices of the inner and outer media. Therefore, it is often assumed that the higher the refractive index of the material, the glossier it looks. However, gloss perception is a complex process that, in addition to specular reflectance, depends on many other factors, such as the object's translucency and shape. In this study, we conducted two psychophysical experiments to quantify the impact of refractive index on perceived gloss for objects with varying degrees of translucency and surface roughness. For some objects a monotonic positive relationship between refractive index and perceived gloss was observed, while for others the relationship was found to be non-monotonic. Afterward, we evaluated how the refractive index affects image cues to gloss and tried to explain the psychophysical results with image statistics.
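As background for the physical premise examined above, the sketch below evaluates the unpolarized Fresnel reflectance of a dielectric interface for a few refractive indices, showing how specular reflectance grows with the index mismatch and toward grazing incidence. This is only the textbook formula, not the paper's psychophysical methodology; the function name and chosen indices are illustrative.

```cpp
// Unpolarized Fresnel reflectance for a dielectric interface, illustrating how
// specular reflection grows with the refractive-index mismatch (the physical
// premise examined in the paper; perceived gloss need not follow it).
#include <cmath>
#include <cstdio>

// cosThetaI: cosine of the incident angle; etaI, etaT: refractive indices of
// the outer and inner media. Returns the average of the s- and p-polarized terms.
double fresnelDielectric(double cosThetaI, double etaI, double etaT) {
    double sinThetaI = std::sqrt(std::fmax(0.0, 1.0 - cosThetaI * cosThetaI));
    double sinThetaT = etaI / etaT * sinThetaI;
    if (sinThetaT >= 1.0) return 1.0;  // total internal reflection
    double cosThetaT = std::sqrt(std::fmax(0.0, 1.0 - sinThetaT * sinThetaT));
    double rs = (etaI * cosThetaI - etaT * cosThetaT) / (etaI * cosThetaI + etaT * cosThetaT);
    double rp = (etaT * cosThetaI - etaI * cosThetaT) / (etaT * cosThetaI + etaI * cosThetaT);
    return 0.5 * (rs * rs + rp * rp);
}

int main() {
    // Normal incidence gives R0 = ((n - 1) / (n + 1))^2 for a material in air;
    // the second column uses an incident angle of 60 degrees (cosine 0.5).
    const double indices[] = {1.33, 1.5, 1.8, 2.4};
    for (double n : indices)
        std::printf("n = %.2f  R(0 deg) = %.3f  R(60 deg) = %.3f\n",
                    n, fresnelDielectric(1.0, 1.0, n), fresnelDielectric(0.5, 1.0, n));
    return 0;
}
```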
ReflectanceFusion: Diffusion-based text to SVBRDF Generation
(The Eurographics Association, 2024) Xue, Bowen; Guarnera, Giuseppe Claudio; Zhao, Shuang; Montazeri, Zahra; Haines, Eric; Garces, Elena
We introduce ReflectanceFusion (Reflectance Diffusion), a new neural text-to-texture model capable of generating high-fidelity SVBRDF maps from textual descriptions. Our method leverages a tandem neural approach, consisting of two modules, to accurately model the distribution of spatially varying reflectance as described by text prompts. Initially, we employ a pre-trained Stable Diffusion 2 model to generate a latent representation that informs the overall shape of the material and serves as our backbone model. Then, our ReflectanceUNet enables fine-tuning control over the material's physical appearance and generates SVBRDF maps. The ReflectanceUNet module is trained on an extensive dataset comprising approximately 200,000 synthetic spatially varying materials. Our generative SVBRDF diffusion model allows for the synthesis of multiple SVBRDF estimates from a single textual input, offering users the possibility to choose the output that best aligns with their requirements. We illustrate our method's versatility by generating SVBRDF maps from a range of textual descriptions, both specific and broad. Our ReflectanceUNet model can integrate optional physical parameters, such as roughness and specularity, enhancing customization. When the backbone module is fixed, the ReflectanceUNet module refines the material, allowing direct edits to its physical attributes. Comparative evaluations demonstrate that ReflectanceFusion achieves better accuracy than existing text-to-material models, such as Text2Mat, while also providing the benefits of editable and relightable SVBRDF maps.

Path Sampling Methods for Differentiable Rendering
(The Eurographics Association, 2024) Su, Tanli; Gkioulekas, Ioannis; Haines, Eric; Garces, Elena
We introduce a suite of path sampling methods for differentiable rendering of scene parameters that do not induce visibility-driven discontinuities, such as BRDF parameters. We begin by deriving a path integral formulation for differentiable rendering of such parameters, which we then use to derive methods that importance sample paths according to this formulation. Our methods are analogous to path tracing and path tracing with next-event estimation for primal rendering, have linear complexity, and can be implemented efficiently using path replay backpropagation. Our methods readily benefit from differential BRDF sampling routines, and can be further enhanced using multiple importance sampling and a loss-aware pixel-space adaptive sampling procedure tailored to our path integral formulation. We show experimentally that our methods reduce variance in rendered gradients by potentially orders of magnitude, and thus help accelerate inverse rendering optimization of BRDF parameters.
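To illustrate the principle that underlies differentiating parameters without visibility-driven discontinuities, the toy sketch below moves the derivative inside a 1D integral and estimates it with the same Monte Carlo samples as the primal estimate, then checks it against finite differences. It is a deliberately simplified stand-in under those assumptions, not the paper's path-integral gradient estimators or path replay backpropagation; the integrand and names are made up for illustration.

```cpp
// Toy illustration of differentiating a continuous (non-visibility) parameter:
// for I(theta) = integral_0^1 f(x; theta) dx with f smooth in theta, the
// derivative moves inside the integral and is estimated with the same Monte
// Carlo machinery as the primal integral.
#include <cmath>
#include <cstdio>
#include <random>

// A smooth "roughness-like" integrand and its analytic derivative in theta.
double f (double x, double theta) { return std::exp(-theta * x * x); }
double df(double x, double theta) { return -x * x * std::exp(-theta * x * x); }

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const double theta = 0.7;
    const int N = 100000;

    double primal = 0.0, grad = 0.0;
    for (int i = 0; i < N; ++i) {
        double x = u(rng);           // uniform samples, pdf p(x) = 1 on [0,1]
        primal += f(x, theta);       // estimates I(theta)
        grad   += df(x, theta);      // estimates dI/dtheta with the same samples
    }
    std::printf("I(theta)    ~ %f\n", primal / N);
    std::printf("dI/dtheta   ~ %f\n", grad / N);

    // Finite-difference sanity check using the same random sequence.
    double h = 1e-4, Ip = 0.0, Im = 0.0;
    std::mt19937 rng2(42);
    for (int i = 0; i < N; ++i) {
        double x = u(rng2);
        Ip += f(x, theta + h);
        Im += f(x, theta - h);
    }
    std::printf("finite diff ~ %f\n", (Ip - Im) / (2.0 * h * N));
    return 0;
}
```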
Employing Multiple Priors in Retinex-Based Low-Light Image Enhancement
(The Eurographics Association, 2024) Yang, Weipeng; Gao, Hongxia; Liu, Tongtong; Ma, Jianliang; Zou, Wenbin; Huang, Shasha; Haines, Eric; Garces, Elena
In the field of low-light image enhancement, images captured under low illumination suffer from severe noise and artifacts, which are often exacerbated during the enhancement process. Our method, grounded in Retinex theory, tackles this challenge by recognizing that the illuminance component predominantly contains low-frequency image information, whereas the reflectance component encompasses high-frequency details, including noise. To effectively suppress noise in the reflectance without compromising detail, our method uniquely amalgamates global, local, and non-local priors. It utilizes the tensor train rank to capture global features, along with two plug-and-play denoisers, a convolutional neural network and a Color Block-Matching 3D filter (CBM3D), to preserve local details and non-local self-similarity. Furthermore, we employ the Proximal Alternating Minimization (PAM) and Alternating Direction Method of Multipliers (ADMM) algorithms to effectively separate the reflectance and illuminance components in the optimization process. Extensive experiments show that our model achieves superior or competitive results in both visual quality and quantitative metrics when compared with state-of-the-art methods. Our code is available at https://github.com/YangWeipengscut/GLON-Retinex.

Constrained Spectral Uplifting for HDR Environment Maps
(The Eurographics Association, 2024) Tódová, Lucia; Wilkie, Alexander; Haines, Eric; Garces, Elena
Spectral representation of assets is an important precondition for achieving physical realism in rendering. However, defining assets by their spectral distribution is complicated and tedious. Therefore, it has become general practice to create RGB assets and convert them into their spectral counterparts prior to rendering. This process is called spectral uplifting. While a multitude of techniques focusing on reflectance uplifting exist, the current state of the art for uplifting emission for image-based lighting consists of simply scaling reflectance uplifts. Although this is usable insofar as the resulting overall scene appearance is not unrealistic, the generated emission spectra are only metamers of the original illumination. This, in turn, can cause deviations from the expected appearance even if the rest of the scene corresponds to real-world data. We propose a method capable of uplifting HDR environment maps based on spectral measurements of light sources similar to those present in the maps. To identify the illuminants, we employ an extensive set of emission measurements, and we combine the results with an existing reflectance uplifting method. In addition, we address the problem of environment map capture for the purposes of a spectral rendering pipeline, for which we propose a novel solution.

ReSTIR FG: Real-Time Reservoir Resampled Photon Final Gathering
(The Eurographics Association, 2024) Kern, René; Brüll, Felix; Grosch, Thorsten; Haines, Eric; Garces, Elena
Achieving real-time global illumination for a given scene remains challenging, even with the advent of hardware ray tracing, due to the substantial quantity of rays required. To enhance the quality of the limited number of samples, spatial and temporal resampling can be used. The concept of resampling gained popularity with ReSTIR DI [BWP*20], enabling real-time direct illumination for scenes with millions of lights. This concept was further extended by combining it with path tracing to quickly approximate indirect illumination (ReSTIR GI [OLK*21]) or to correctly approximate global illumination (ReSTIR PT [LKB*22] and Suffix ReSTIR [KLR*23]). However, these algorithms fall short in effectively rendering caustic effects (bundles of reflected or refracted light) often associated with photon mapping. We introduce ReSTIR FG, an efficient real-time indirect illumination algorithm that combines photon final gathering with the principles of ReSTIR. First, we introduce an efficient photon final gathering scheme, enabling quick, consistent offline rendering. Then we combine our photon final gathering with spatiotemporal resampling to allow for real-time global illumination. Our algorithm is capable of displaying multi-bounce indirect illumination as well as caustic effects, while remaining competitive in both runtime and quality when compared to the aforementioned state-of-the-art global illumination resampling techniques.
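For readers unfamiliar with the resampling machinery referenced above, the sketch below implements the generic streaming reservoir used in resampled importance sampling (RIS), the building block shared by ReSTIR-style methods: candidates arrive with weight p_hat(x) / p(x), and one survivor is kept with probability proportional to its weight. The photon final gathering and spatiotemporal reuse of the paper are not shown; the toy target distribution and names are illustrative.

```cpp
// Minimal streaming reservoir for resampled importance sampling (RIS).
// This is only the generic primitive, not the paper's final-gathering pipeline.
#include <cstdio>
#include <random>

struct Reservoir {
    int    sample = -1;   // payload of the surviving candidate
    double wSum   = 0.0;  // sum of all candidate weights seen so far
    int    count  = 0;    // number of candidates streamed through

    template <class Rng>
    void update(int candidate, double weight, Rng& rng) {
        wSum += weight;
        ++count;
        std::uniform_real_distribution<double> u(0.0, 1.0);
        if (wSum > 0.0 && u(rng) < weight / wSum)
            sample = candidate;  // replace survivor with probability w / wSum
    }
};

int main() {
    std::mt19937 rng(7);
    // Toy target function p_hat over 5 discrete candidates (sums to 1 here).
    const double pHat[5] = {0.1, 0.4, 0.05, 0.3, 0.15};

    // Stream uniform candidate proposals and count which one survives.
    int histogram[5] = {0, 0, 0, 0, 0};
    std::uniform_int_distribution<int> pick(0, 4);
    for (int trial = 0; trial < 100000; ++trial) {
        Reservoir r;
        for (int i = 0; i < 32; ++i) {
            int c = pick(rng);                     // uniform proposal, p(c) = 1/5
            r.update(c, pHat[c] / (1.0 / 5.0), rng);
        }
        ++histogram[r.sample];
    }
    for (int c = 0; c < 5; ++c)
        std::printf("candidate %d: target %.2f, survivor frequency %.3f\n",
                    c, pHat[c], histogram[c] / 100000.0);
    return 0;
}
```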
Estimating Uncertainty in Appearance Acquisition
(The Eurographics Association, 2024) Zhou, Zhiqian; Zhang, Cheng; Dong, Zhao; Marshall, Carl; Zhao, Shuang; Haines, Eric; Garces, Elena
The inference of material reflectance from physical observations (e.g., photographs) is usually under-constrained, causing point estimates to suffer from ambiguity and, thus, generalize poorly to novel configurations. Conventional methods address this problem by using dense observations or introducing priors. In this paper, we tackle this problem from a different angle by introducing a method to quantify uncertainties. Based on a Bayesian formulation, our method can quantitatively analyze how under-constrained a material inference problem is (given the observations and priors) by sampling the entire posterior distribution of material parameters rather than optimizing a single point estimate as given by most inverse rendering methods. Further, we present a method to guide acquisition processes by recommending viewing/lighting configurations for making additional observations. We demonstrate the usefulness of our technique on several synthetic examples and one real example.

Computing Manifold Next-Event Estimation without Derivatives using the Nelder-Mead Method
(The Eurographics Association, 2024) Granizo-Hidalgo, Ana; Holzschuch, Nicolas; Haines, Eric; Garces, Elena
Specular surfaces, by focusing the light that is being reflected or refracted, cause bright spots in the scene, called caustics. These caustics are challenging to compute for global illumination algorithms. Manifold-based methods (Manifold Exploration, Manifold Next-Event Estimation, Specular Next-Event Estimation) compute these caustics as the zeros of an objective function, using the Newton-Raphson method. They are efficient, but they require computing the derivatives of the objective function, which in turn requires local surface derivatives around the reflection point, and these can be challenging to implement. In this paper, we leverage the Nelder-Mead method to compute caustics using Manifold Next-Event Estimation without having to compute local derivatives. Our method only requires local evaluations of the objective function, making it an easy addition to any path-tracing algorithm.
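Because the method above replaces Newton-Raphson with the derivative-free Nelder-Mead search, the sketch below shows a compact Nelder-Mead minimizer on a standard 2D test function (Rosenbrock). The actual objective for manifold next-event estimation is defined in the paper; this is only a generic illustration of the optimizer, with the textbook reflection, expansion, contraction, and shrink coefficients and illustrative names.

```cpp
// Compact Nelder-Mead minimization of a 2D objective: a derivative-free
// simplex search that only needs function evaluations, no gradients.
#include <algorithm>
#include <array>
#include <cstdio>

using Vec2 = std::array<double, 2>;

// Test objective: Rosenbrock function, minimum 0 at (1, 1).
double objective(const Vec2& p) {
    double a = 1.0 - p[0], b = p[1] - p[0] * p[0];
    return a * a + 100.0 * b * b;
}

// Linear interpolation between two points: a + t * (b - a).
Vec2 mix(const Vec2& a, const Vec2& b, double t) {
    return {a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])};
}

Vec2 nelderMead(Vec2 start, int iterations) {
    // Initial simplex: the start point plus small offsets along each axis.
    std::array<Vec2, 3> s = {start,
                             Vec2{start[0] + 0.1, start[1]},
                             Vec2{start[0], start[1] + 0.1}};
    for (int it = 0; it < iterations; ++it) {
        // Order vertices from best (lowest) to worst (highest) objective value.
        std::sort(s.begin(), s.end(),
                  [](const Vec2& a, const Vec2& b) { return objective(a) < objective(b); });
        Vec2 centroid = mix(s[0], s[1], 0.5);           // centroid of the two best
        Vec2 reflect  = mix(s[2], centroid, 2.0);       // reflect worst through centroid
        if (objective(reflect) < objective(s[0])) {
            Vec2 expand = mix(s[2], centroid, 3.0);     // try stepping even further
            s[2] = objective(expand) < objective(reflect) ? expand : reflect;
        } else if (objective(reflect) < objective(s[1])) {
            s[2] = reflect;                             // accept the reflection
        } else {
            Vec2 contract = mix(s[2], centroid, 0.5);   // pull worst toward centroid
            if (objective(contract) < objective(s[2])) {
                s[2] = contract;
            } else {                                    // shrink toward the best vertex
                s[1] = mix(s[0], s[1], 0.5);
                s[2] = mix(s[0], s[2], 0.5);
            }
        }
    }
    return s[0];
}

int main() {
    Vec2 p = nelderMead({-1.2, 1.0}, 500);
    std::printf("minimum near (%f, %f), f = %g\n", p[0], p[1], objective(p));
    return 0;
}
```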
High Quality Neural Relighting using Practical Zonal Illumination
(The Eurographics Association, 2024) Lin, Arvin; Lin, Yiming; Li, Xiaohui; Ghosh, Abhijeet; Haines, Eric; Garces, Elena
We present a method for high-quality image-based relighting using a practical, limited zonal illumination field. Our setup can be implemented with commodity components and no dedicated hardware. We employ a set of desktop monitors to illuminate a subject from a near-hemispherical zone and record One-Light-At-A-Time (OLAT) images from multiple viewpoints. We further extrapolate the sampling of incident illumination directions beyond the frontal coverage of the monitors by repeating OLAT captures with the subject rotated relative to the capture setup. Finally, we train our proposed skip-assisted autoencoder and latent-diffusion-based generative method to learn a high-quality continuous representation of the reflectance function without requiring explicit alignment of the data captured from the various viewpoints. This method enables smooth lighting animation for high-frequency reflectance functions and effectively extends incident lighting beyond the capture setup's practical illumination zone. Compared to state-of-the-art methods, our approach achieves superior image-based relighting results, capturing finer skin pore details and extending to passive performance video relighting.

Learning Self-Shadowing for Clothed Human Bodies
(The Eurographics Association, 2024) Einabadi, Farshad; Guillemaut, Jean-Yves; Hilton, Adrian; Haines, Eric; Garces, Elena
This paper proposes to learn self-shadowing on full-body, clothed human postures from monocular colour image input by supervising a deep neural model. The proposed approach implicitly learns the articulated body shape in order to generate self-shadow maps, without seeking to explicitly reconstruct or estimate parametric 3D body geometry. Furthermore, it generalises to different people without per-subject pre-training and has fast inference timings. The proposed neural model is trained on self-shadow maps rendered from 3D scans of real people for various light directions. Inference of shadow maps for a given illumination is performed from only 2D image input. Quantitative and qualitative experiments demonstrate comparable results to the state of the art whilst being monocular and achieving a considerably faster inference time. We provide ablations of our methodology and further show how the inferred self-shadow maps can benefit monocular full-body human relighting.
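As a minimal illustration of how an inferred self-shadow map can feed into relighting, as mentioned in the last sentence above, the hypothetical sketch below modulates a per-pixel Lambertian shading term by a shadow value in [0, 1]. The neural inference of that shadow map from a single colour image is the paper's contribution and is not modelled here; all inputs and names are assumed for illustration only.

```cpp
// Minimal sketch of using a per-pixel self-shadow map during relighting:
// a predicted shadow value simply attenuates the direct shading term.
#include <algorithm>
#include <array>
#include <cstdio>

using Vec3 = std::array<double, 3>;

double dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Shade one pixel: albedo and normal come from some image/geometry estimate,
// shadow in [0,1] is the inferred self-shadowing for the chosen light direction.
Vec3 relightPixel(const Vec3& albedo, const Vec3& normal,
                  const Vec3& lightDir, const Vec3& lightColor, double shadow) {
    double nDotL = std::max(0.0, dot(normal, lightDir));
    return {albedo[0] * lightColor[0] * nDotL * shadow,
            albedo[1] * lightColor[1] * nDotL * shadow,
            albedo[2] * lightColor[2] * nDotL * shadow};
}

int main() {
    Vec3 albedo = {0.6, 0.45, 0.4};       // skin/cloth-like base colour
    Vec3 normal = {0.0, 0.0, 1.0};        // surface facing the camera
    Vec3 light  = {0.0, 0.7071, 0.7071};  // light 45 degrees above the view axis
    Vec3 white  = {1.0, 1.0, 1.0};

    Vec3 lit      = relightPixel(albedo, normal, light, white, 1.0);  // fully visible
    Vec3 shadowed = relightPixel(albedo, normal, light, white, 0.2);  // mostly self-shadowed
    std::printf("lit:      %.3f %.3f %.3f\n", lit[0], lit[1], lit[2]);
    std::printf("shadowed: %.3f %.3f %.3f\n", shadowed[0], shadowed[1], shadowed[2]);
    return 0;
}
```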