Rendering 2023 - Symposium Track
Browsing by Subject "Computing methodologies" (10 items)
Item: Fast Procedural Noise By Monte Carlo Sampling (The Eurographics Association, 2023)
Fajardo, Marcos; Pharr, Matt; Ritschel, Tobias; Weidlich, Andrea
Procedural noise functions are widely used in computer graphics as a way to add texture detail to surfaces and volumes. Many noise functions are based on weighted sums that can be expressed in terms of random variables, which makes it possible to compute Monte Carlo estimates of their values at lower cost. Such stochastic noise functions fit naturally into many Monte Carlo estimators already used in rendering. Leveraging the dense image-plane sampling in modern path tracing renderers, we show that stochastic evaluation allows the use of procedural noise at a fraction of its full cost with little additional error.

Item: Floaters No More: Radiance Field Gradient Scaling for Improved Near-Camera Training (The Eurographics Association, 2023)
Philip, Julien; Deschaintre, Valentin; Ritschel, Tobias; Weidlich, Andrea
NeRF acquisition typically requires careful choice of near planes for the different cameras or suffers from background collapse, creating floating artifacts on the edges of the captured scene. The key insight of this work is that background collapse is caused by a higher density of samples in regions near cameras. As a result of this sampling imbalance, near-camera volumes receive significantly more gradients, leading to incorrect density buildup. We propose a gradient scaling approach to counter-balance this sampling imbalance, removing the need for near planes while preventing background collapse.
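The gradient-scaling idea lends itself to a compact sketch. The quadratic falloff and the `near` scale below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def gradient_scale(t, near=1.0):
    # Down-weight gradients of samples closer to the camera than `near`;
    # samples at or beyond `near` keep their full gradient (scale 1).
    return np.minimum((t / near) ** 2, 1.0)

# Toy backward step: per-sample density gradients are rescaled before
# accumulation, so dense near-camera samples cannot dominate training.
t = np.array([0.1, 0.5, 1.0, 3.0])   # sample distances along a ray
grad_sigma = np.ones_like(t)         # dummy incoming gradients
scaled_grad = grad_sigma * gradient_scale(t)
```

Because the scale reaches 1 at the near distance, far-field training is unchanged; only the over-sampled near-camera region is attenuated.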
Our method can be implemented in a few lines, does not induce any significant overhead, and is compatible with most NeRF implementations.

Item: FloralSurf: Space-Filling Geodesic Ornaments (The Eurographics Association, 2023)
Albano, Valerio; Fanni, Filippo Andrea; Giachetti, Andrea; Pellacini, Fabio; Ritschel, Tobias; Weidlich, Andrea
We propose a method to generate floral patterns on manifolds without relying on parametrizations. Taking inspiration from the literature on procedural space-filling vegetation, these patterns are made of non-intersecting ornaments that are grown on the surface by repeatedly adding different types of decorative elements until the whole surface is covered. Each decorative element is defined by a set of geodesic Bézier splines and a set of growth points from which to continue growing the ornaments. Ornaments are grown in a greedy fashion, one decorative element at a time. At each step, we analyze a set of candidates and retain the one that maximizes surface coverage while ensuring that it does not intersect other ornaments. All operations in our method are performed in the intrinsic metric of the surface, thus ensuring that the derived decorations have good coverage, with neither distortions nor discontinuities, and can be grown on complex surfaces. In our method, users control the decorations by selecting the size and shape of the decorative elements and the position of the growth points. We demonstrate decorations that vary in the length of the ornaments' lines and in the number, scale, and orientation of the placed decorations. We show that these patterns closely mimic the design of hand-drawn objects. Our algorithm supports any manifold surface represented as a triangle mesh.
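The greedy growth loop described above can be sketched independently of the geodesic machinery. Here `candidates_fn`, `coverage_fn`, and `intersects_fn` are hypothetical callbacks standing in for the intrinsic geometric tests performed on the surface:

```python
def grow_ornaments(seeds, candidates_fn, coverage_fn, intersects_fn, max_steps=1000):
    # Greedy loop: place one decorative element at a time, keeping the
    # candidate that maximizes coverage while intersecting no placed ornament.
    placed = []
    frontier = list(seeds)
    for _ in range(max_steps):
        if not frontier:
            break
        point = frontier.pop(0)
        ok = [c for c in candidates_fn(point) if not intersects_fn(c, placed)]
        if not ok:
            continue
        best = max(ok, key=coverage_fn)
        placed.append(best)
        # Newly placed elements expose growth points to continue from.
        frontier.extend(best["growth_points"])
    return placed
```

The loop terminates when no growth point admits a non-intersecting candidate, which is when the surface is as covered as the element library allows.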
In particular, we demonstrate patterns generated on surfaces with high genus, with and without borders and holes, and with a mixture of thin and large features.

Item: Gaze-Contingent Perceptual Level of Detail Prediction (The Eurographics Association, 2023)
Surace, Luca; Tursun, Cara; Celikcan, Ufuk; Didyk, Piotr; Ritschel, Tobias; Weidlich, Andrea
New virtual reality headsets and wide field-of-view displays rely on foveated rendering techniques that lower the rendering quality for peripheral vision to increase performance without a perceptible quality loss. While the concept is simple, the practical realization of foveated rendering systems and their full exploitation are still challenging. Existing techniques focus on modulating the spatial resolution of rendering or the shading rate according to the characteristics of human perception. However, most rendering systems also have a significant cost related to geometry processing. In this work, we investigate the problem of mesh simplification, also known as the level-of-detail (LOD) technique, for foveated rendering. We aim to maximize the amount of LOD simplification while keeping the visibility of changes to the object geometry under a selected threshold. We first propose two perceptually inspired visibility models for mesh simplification suitable for gaze-contingent rendering. The first model focuses on spatial distortions in the object silhouette and body. The second model accounts for the temporal visibility of switching between two LODs. We calibrate the two models using data from perceptual experiments and derive a computational method that predicts a suitable LOD for rendering an object at a specific eccentricity without objectionable quality loss. We apply the technique to the foveated rendering of static and dynamic objects and demonstrate the benefits in a validation experiment.
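At run time, a prediction of this kind reduces to mapping gaze eccentricity to an LOD index. The thresholds below are invented for illustration; the paper calibrates the mapping from perceptual experiments:

```python
import math

def eccentricity_deg(gaze_dir, obj_dir):
    # Angle between the gaze direction and the direction to the object.
    dot = sum(g * o for g, o in zip(gaze_dir, obj_dir))
    n = math.sqrt(sum(g * g for g in gaze_dir)) * math.sqrt(sum(o * o for o in obj_dir))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / n))))

def select_lod(ecc, thresholds=(5.0, 15.0, 30.0)):
    # Coarser meshes (higher LOD index) are acceptable at larger eccentricity.
    for lod, limit in enumerate(thresholds):
        if ecc < limit:
            return lod
    return len(thresholds)
```

An object under direct gaze (eccentricity near zero) gets the full-detail mesh, while far-peripheral objects can be simplified aggressively.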
Using our perceptually driven gaze-contingent LOD selection, we achieve up to 33% extra speedup in the rendering performance of complex-geometry scenes when combined with the most recent industrial solutions, i.e., Nanite from Unreal Engine.

Item: Learning Projective Shadow Textures for Neural Rendering of Human Cast Shadows from Silhouettes (The Eurographics Association, 2023)
Einabadi, Farshad; Guillemaut, Jean-Yves; Hilton, Adrian; Ritschel, Tobias; Weidlich, Andrea
This contribution introduces a novel two-step neural rendering framework that learns the transformation from a 2D human silhouette mask to the corresponding cast shadows on background scene geometry. In the first step, the proposed neural renderer learns a binary shadow texture (canonical shadow) from the 2D foreground subject for each point light source, independent of the background scene geometry. Next, the generated binary shadows are texture-mapped onto transparent virtual shadow-map planes, which are used seamlessly in a traditional rendering pipeline to project hard or soft shadows for arbitrary scenes and light sources of different sizes. The neural renderer is trained with shadow images rendered by a fast, scalable, synthetic data generation framework. We introduce the 3D Virtual Human Shadow (3DVHshadow) dataset as a public benchmark for training and evaluating human shadow generation. Evaluation on the 3DVHshadow test set and on real 2D silhouette images of people demonstrates that the proposed framework achieves performance comparable to traditional geometry-based renderers without requiring knowledge, or computationally intensive explicit estimation, of the 3D human shape. We also show the benefit of learning intermediate canonical shadow textures compared to learning to generate shadows directly in camera image space.
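The second, non-neural half of such a pipeline is the classic projective step: mapping the shadow texture onto a receiver from the light's point of view. A minimal sketch for a point light and a ground plane y = 0 (the paper handles arbitrary receiver geometry):

```python
def project_to_ground(p, light):
    # Intersect the ray from the light through p with the plane y = 0.
    # Mapping every texel of a shadow-map plane this way projects the
    # shadow onto the receiver; assumes the light is above the point.
    lx, ly, lz = light
    px, py, pz = p
    t = ly / (ly - py)   # ray parameter at which y reaches 0
    return (lx + t * (px - lx), 0.0, lz + t * (pz - lz))
```

This is the standard planar-projection construction; the learned part of the framework only supplies the binary texture being projected.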
Further experiments evaluate the effect of multiple light sources in the scene, model performance with regard to the relative camera-light 2D angular distance, potential aliasing artefacts related to output image resolution, and the effect of the light sources' dimensions on shadow softness.

Item: Mean Value Caching for Walk on Spheres (The Eurographics Association, 2023)
Bakbouk, Ghada; Peers, Pieter; Ritschel, Tobias; Weidlich, Andrea
Walk on Spheres (WoS) is a grid-free Monte Carlo method for numerically estimating solutions of elliptic partial differential equations (PDEs) such as the Laplace and Poisson equations. While WoS is efficient for computing a solution value at a single evaluation point, it becomes less efficient when the solution is required over a whole domain or a region of interest. WoS computes a solution for each evaluation point separately, possibly recomputing similar sub-walks multiple times over multiple evaluation points. In this paper, we introduce a novel filtering and caching strategy that leverages the volume mean value property (in contrast to the boundary mean value property that forms the core of WoS). In addition, to improve quality under sparse cache regimes, we describe a weighted mean as well as a non-uniform sampling method. Finally, we show that we can reduce the variance within the cache by recursively applying the volume mean value property to the cached elements.

Item: A Microfacet Model for Specular Fluorescent Surfaces and Fluorescent Volume Rendering using Quantum Dots (The Eurographics Association, 2023)
Benamira, Alexis; Pattanaik, Sumant; Ritschel, Tobias; Weidlich, Andrea
The fluorescent appearance of materials results from a complex light-material interaction phenomenon. The modeling of fluorescent materials for rendering has only been addressed through measurement or for simple diffuse reflections, thus limiting the range of representable appearances.
In this work, we introduce and model a fluorescent nanoparticle called a Quantum Dot (QD) for rendering. Our modeling of Quantum Dots serves as a foundation for two physically based rendering applications: first, a fluorescent volumetric scattering model, and second, a fluorescent specular microfacet scattering model. For the latter, we model the Fresnel energy reflection coefficient of a QD-coated microfacet assuming specular fluorescence, making our approach easy to integrate with any microfacet reflection model.

Item: pEt: Direct Manipulation of Differentiable Vector Patterns (The Eurographics Association, 2023)
Riso, Marzia; Pellacini, Fabio; Ritschel, Tobias; Weidlich, Andrea
Procedural assets are used in computer graphics applications because variations can be obtained by changing the parameters of the procedural programs. As the number of parameters increases, editing becomes cumbersome, as users have to manually navigate a large space of choices. Many methods in the literature estimate parameters from example images, which works well for initial starting points. For precise edits, inverse manipulation approaches let users manipulate the output asset interactively while the system determines the procedural parameters. In this work, we focus on editing procedural vector patterns: collections of vector primitives generated by procedural programs. Recent work has shown how to estimate procedural parameters from example images and sketches, which we complement here with a method for direct manipulation. In our approach, users select and interactively transform a set of shape points while also constraining other selected points. Our method then optimizes for the best pattern parameters using gradient-based optimization of the differentiable procedural functions.
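The optimization at the core of such direct manipulation can be sketched with finite differences standing in for the analytic gradients of the differentiable procedural program. `forward` is a hypothetical procedural function returning the positions of the user-selected points:

```python
def fit_parameters(params, forward, targets, lr=0.1, steps=200, eps=1e-4):
    # Minimize the squared distance between the selected pattern points
    # and the user's target positions by descending a finite-difference
    # gradient over the procedural parameters.
    params = list(params)

    def loss(p):
        pts = forward(p)
        return sum((x - tx) ** 2 + (y - ty) ** 2
                   for (x, y), (tx, ty) in zip(pts, targets))

    for _ in range(steps):
        grad = []
        for i in range(len(params)):
            bumped = params[:]
            bumped[i] += eps
            grad.append((loss(bumped) - loss(params)) / eps)
        params = [p - lr * g for p, g in zip(params, grad)]
    return params
```

Constrained points can be folded into the same loss as extra target terms, which is how drag-one-point-pin-another edits become a single optimization.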
We support edits on a large variety of patterns with different shapes, symmetries, continuous and discrete parameters, and with or without occlusions.

Item: Practical Temporal and Stereoscopic Filtering for Real-time Ray Tracing (The Eurographics Association, 2023)
Philippi, Henrik; Frisvad, Jeppe Revall; Jensen, Henrik Wann; Ritschel, Tobias; Weidlich, Andrea
We present a practical method for temporal and stereoscopic filtering that generates stereo-consistent rendering. Existing methods for stereoscopic rendering often reuse samples from one eye for the other or average between the two eyes. These approaches fail in the presence of ray tracing effects such as specular reflections and refractions. We derive a new blending strategy that leverages variance to compute per-pixel blending weights for both temporal and stereoscopic rendering. In the temporal domain, our method works well in a low-noise context and is robust in the presence of inconsistent motion vectors, where existing methods such as temporal anti-aliasing (TAA) and deep learning super sampling (DLSS) produce artifacts. In the stereoscopic domain, our method provides a new way to ensure consistency between the left and right eyes. The stereoscopic version of our method can be used with our new temporal method or with existing methods such as DLSS and TAA. In all combinations, it reduces the error and significantly increases the consistency between the eyes, making it practical for real-time settings such as virtual reality (VR).

Item: SparseBTF: Sparse Representation Learning for Bidirectional Texture Functions (The Eurographics Association, 2023)
Kavoosighafi, Behnaz; Frisvad, Jeppe Revall; Hajisharif, Saghi; Unger, Jonas; Miandji, Ehsan; Ritschel, Tobias; Weidlich, Andrea
We propose a novel dictionary-based representation learning model for Bidirectional Texture Functions (BTFs) that aims at compact storage, real-time rendering performance, and high image quality.
Our model is trained once, using a small training set, and then used to obtain a sparse tensor containing the model parameters. Our technique exploits redundancies in the data across all dimensions simultaneously, as opposed to existing methods that use only angular information and ignore correlations in the spatial domain. We show that our model admits efficient angular interpolation directly in the model space, rather than in the BTF space, leading to notably higher rendering speed than previous work. Additionally, the favorable quality-storage tradeoff enabled by our method makes it possible to control image quality, storage cost, and rendering speed with a single parameter: the number of coefficients. Previous methods rely on a fixed number of latent variables for training and testing, limiting the potential for achieving a favorable quality-storage tradeoff and scalability. Our experimental results demonstrate that our method outperforms existing methods both quantitatively and qualitatively, while also achieving a higher compression ratio and rendering speed.
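At render time, a dictionary-based sparse model of this kind reconstructs a BTF sample as a short linear combination of dictionary atoms, so truncating to the k largest coefficients trades quality for storage and speed. A generic sketch of that step (the paper's dictionary is a multidimensional ensemble, not the flat matrix assumed here):

```python
import numpy as np

def reconstruct(dictionary, atom_ids, coeffs, k):
    # Keep only the k largest-magnitude coefficients and sum the
    # corresponding dictionary atoms; reconstruction quality, storage,
    # and speed all scale with the single parameter k.
    order = np.argsort(-np.abs(coeffs))[:k]
    return dictionary[:, atom_ids[order]] @ coeffs[order]
```

Because the coefficients are sorted by magnitude, lowering k degrades the reconstruction gracefully rather than abruptly.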