Rendering 2023 - Symposium Track
Browsing Rendering 2023 - Symposium Track by Subject "Rendering"
Now showing 1 - 4 of 4
Item
Fast Procedural Noise By Monte Carlo Sampling (The Eurographics Association, 2023)
Fajardo, Marcos; Pharr, Matt; Ritschel, Tobias; Weidlich, Andrea
Procedural noise functions are widely used in computer graphics as a way to add texture detail to surfaces and volumes. Many noise functions are based on weighted sums that can be expressed in terms of random variables, which makes it possible to compute Monte Carlo estimates of their values at lower cost. Such stochastic noise functions fit naturally into many Monte Carlo estimators already used in rendering. Leveraging the dense image-plane sampling in modern path tracing renderers, we show that stochastic evaluation allows the use of procedural noise at a fraction of its full cost with little additional error. (See Sketch 1 after this listing.)

Item
Floaters No More: Radiance Field Gradient Scaling for Improved Near-Camera Training (The Eurographics Association, 2023)
Philip, Julien; Deschaintre, Valentin; Ritschel, Tobias; Weidlich, Andrea
NeRF acquisition typically requires a careful choice of near planes for the different cameras; otherwise it suffers from background collapse, creating floating artifacts on the edges of the captured scene. The key insight of this work is that background collapse is caused by a higher density of samples in regions near cameras. As a result of this sampling imbalance, near-camera volumes receive significantly more gradients, leading to incorrect density buildup. We propose a gradient scaling approach to counterbalance this sampling imbalance, removing the need for near planes while preventing background collapse. Our method can be implemented in a few lines, does not induce any significant overhead, and is compatible with most NeRF implementations. (See Sketch 2 after this listing.)

Item
Practical Temporal and Stereoscopic Filtering for Real-time Ray Tracing (The Eurographics Association, 2023)
Philippi, Henrik; Frisvad, Jeppe Revall; Jensen, Henrik Wann; Ritschel, Tobias; Weidlich, Andrea
We present a practical method for temporal and stereoscopic filtering that generates stereo-consistent rendering. Existing methods for stereoscopic rendering often reuse samples from one eye for the other or average between the two eyes. These approaches fail in the presence of ray tracing effects such as specular reflections and refractions. We derive a new blending strategy that leverages variance to compute per-pixel blending weights for both temporal and stereoscopic rendering. In the temporal domain, our method works well in a low-noise context and is robust in the presence of inconsistent motion vectors, where existing methods such as temporal anti-aliasing (TAA) and deep learning super sampling (DLSS) produce artifacts. In the stereoscopic domain, our method provides a new way to ensure consistency between the left and right eyes. The stereoscopic version of our method can be used with our new temporal method or with existing methods such as DLSS and TAA. In all combinations, it reduces the error and significantly increases the consistency between the eyes, making it practical for real-time settings such as virtual reality (VR). (See Sketch 3 after this listing.)

Item
SparseBTF: Sparse Representation Learning for Bidirectional Texture Functions (The Eurographics Association, 2023)
Kavoosighafi, Behnaz; Frisvad, Jeppe Revall; Hajisharif, Saghi; Unger, Jonas; Miandji, Ehsan; Ritschel, Tobias; Weidlich, Andrea
We propose a novel dictionary-based representation learning model for Bidirectional Texture Functions (BTFs) aiming at compact storage, real-time rendering performance, and high image quality. Our model is trained once, using a small training set, and then used to obtain a sparse tensor containing the model parameters. Our technique exploits redundancies in the data across all dimensions simultaneously, as opposed to existing methods that use only angular information and ignore correlations in the spatial domain. We show that our model admits efficient angular interpolation directly in the model space, rather than the BTF space, leading to a notably higher rendering speed than in previous work. Additionally, the quality-storage tradeoff enabled by our method makes it possible to control image quality, storage cost, and rendering speed with a single parameter: the number of coefficients. Previous methods rely on a fixed number of latent variables for training and testing, which limits the potential for a favorable quality-storage tradeoff and for scalability. Our experimental results demonstrate that our method outperforms existing methods both quantitatively and qualitatively, while also achieving a higher compression ratio and a higher rendering speed. (See Sketch 4 after this listing.)
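Sketch 1. A minimal illustration of the stochastic-evaluation idea from Fast Procedural Noise By Monte Carlo Sampling: a full octave-sum (fBm-style) noise is replaced by an unbiased single-octave estimate, where one octave is sampled with probability proportional to its weight. The hash-based value_noise and all names below are illustrative stand-ins, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(7)

def value_noise(x, y, freq):
    """Placeholder noise band: a cheap hash of the lattice cell."""
    ix, iy = np.floor(x * freq), np.floor(y * freq)
    return np.modf(np.sin(ix * 12.9898 + iy * 78.233) * 43758.5453)[0]

def fbm_full(x, y, octaves=8, gain=0.5):
    """Reference: full weighted sum over all octaves."""
    total, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amp * value_noise(x, y, freq)
        amp *= gain
        freq *= 2.0
    return total

def fbm_stochastic(x, y, octaves=8, gain=0.5):
    """Unbiased one-sample estimate: pick one octave with probability
    proportional to its amplitude, then divide by that probability."""
    amps = gain ** np.arange(octaves)
    pmf = amps / amps.sum()
    k = rng.choice(octaves, p=pmf)
    return amps[k] * value_noise(x, y, 2.0 ** k) / pmf[k]

# Averaged over many pixel samples, as a path tracer already provides,
# the estimator converges to the full sum at a fraction of the cost.
print(fbm_full(0.3, 0.7))
print(np.mean([fbm_stochastic(0.3, 0.7) for _ in range(10000)]))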
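Sketch 2. Floaters No More scales the gradients of near-camera samples during NeRF training. One way to express that, shown below in PyTorch, is an identity forward pass whose backward pass damps gradients by a distance-based factor. The quadratic ramp and the t_near threshold are assumptions for illustration, not necessarily the paper's exact scaling function.

import torch

class ScaleGradByDistance(torch.autograd.Function):
    """Identity in the forward pass; scales gradients in the backward
    pass so near-camera samples accumulate less density."""
    @staticmethod
    def forward(ctx, values, scale):
        ctx.save_for_backward(scale)
        return values

    @staticmethod
    def backward(ctx, grad_out):
        (scale,) = ctx.saved_tensors
        return grad_out * scale, None

def damp_near_camera(values, t, t_near=1.0):
    """values: (N, C) per-sample quantities (e.g., density, color);
    t: (N,) distance of each sample along its ray. Samples beyond
    t_near keep their full gradient; closer ones are damped."""
    scale = torch.clamp((t / t_near) ** 2, max=1.0).unsqueeze(-1)
    return ScaleGradByDistance.apply(values, scale.detach())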
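Sketch 3. The blending strategy in Practical Temporal and Stereoscopic Filtering computes per-pixel weights from variance. The sketch below shows the standard inverse-variance combination the abstract alludes to; the paper's derivation of the per-pixel weights may differ in detail.

import numpy as np

def variance_blend(mean_a, var_a, mean_b, var_b, eps=1e-8):
    """Blend two noisy estimates of the same pixel (current frame vs.
    reprojected history, or left vs. right eye): the lower-variance
    estimate receives the larger weight."""
    w_a = var_b / (var_a + var_b + eps)
    return w_a * mean_a + (1.0 - w_a) * mean_b

This weighting degrades gracefully: if the reprojected history is unreliable, for instance because inconsistent motion vectors inflate its variance, the weight shifts back toward the current frame instead of producing ghosting artifacts.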
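Sketch 4. SparseBTF stores a sparse set of coefficients over a dictionary trained once on a small set, so quality, storage, and rendering speed all scale with a single parameter, the number of coefficients K. The flat dictionary layout below is a simplification of the paper's multidimensional tensor model; all names are illustrative.

import numpy as np

def reconstruct(dictionary, indices, coeffs):
    """dictionary: (n_atoms, atom_dim) learned offline;
    (indices, coeffs): the K-sparse code of one BTF sample.
    Evaluation cost is O(K * atom_dim), so fewer coefficients
    trade quality for storage and speed."""
    return coeffs @ dictionary[indices]

# Example: a 3-sparse code over a 256-atom dictionary.
D = np.random.default_rng(0).standard_normal((256, 16))
print(reconstruct(D, np.array([3, 41, 200]), np.array([0.8, -0.2, 0.1])))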