Browsing by Author "Frisvad, Jeppe Revall"
Now showing 1 - 8 of 8
Item: Computing the Bidirectional Scattering of a Microstructure Using Scalar Diffraction Theory and Path Tracing (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Falster, Viggo; Jarabo, Adrián; Frisvad, Jeppe Revall; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Most models for bidirectional surface scattering by arbitrary explicitly defined microgeometry are either based on geometric optics and include multiple scattering but no diffraction effects, or based on wave optics and include diffraction but no multiple scattering effects. The few exceptions to this tendency are based on rigorous solution of Maxwell's equations and are computationally intractable for surface microgeometries that are tens or hundreds of microns wide. We set up a measurement equation for combining results from single-scattering scalar diffraction theory with multiple-scattering geometric optics using Monte Carlo integration. Since we consider an arbitrary surface microgeometry, our method enables us to compute the expected bidirectional scattering of the metasurfaces with increasingly small details that are seen more and more often in production. In addition, we can take a measured microstructure as input and, for example, compute the difference in bidirectional scattering between a desired surface and a produced surface. In effect, our model can account for both diffraction colors due to wavelength-sized features in the microgeometry and brightening due to multiple scattering. We include scalar diffraction for refraction, and we verify that our model is reasonable by comparing with the rigorous solution for a microsurface with half ellipsoids.

Item: EUROGRAPHICS 2020: Tutorials Frontmatter (Eurographics Association, 2020)
Fjeld, Morten; Frisvad, Jeppe Revall; Fjeld, Morten and Frisvad, Jeppe Revall

Item: Practical Temporal and Stereoscopic Filtering for Real-time Ray Tracing (The Eurographics Association, 2023)
Philippi, Henrik; Frisvad, Jeppe Revall; Jensen, Henrik Wann; Ritschel, Tobias; Weidlich, Andrea
We present a practical method for temporal and stereoscopic filtering that generates stereo-consistent rendering. Existing methods for stereoscopic rendering often reuse samples from one eye for the other or average between the two eyes. These approaches fail in the presence of ray tracing effects such as specular reflections and refractions. We derive a new blending strategy that leverages variance to compute per-pixel blending weights for both temporal and stereoscopic rendering. In the temporal domain, our method works well in a low-noise context and is robust in the presence of inconsistent motion vectors, where existing methods such as temporal anti-aliasing (TAA) and deep learning super sampling (DLSS) produce artifacts. In the stereoscopic domain, our method provides a new way to ensure consistency between the left and right eyes. The stereoscopic version of our method can be used with our new temporal method or with existing methods such as DLSS and TAA. In all combinations, it reduces the error and significantly increases the consistency between the eyes, making it practical for real-time settings such as virtual reality (VR).
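As a rough illustration of the variance-driven blending idea described in the item above (a minimal sketch of generic inverse-variance weighting, not the paper's actual formulation), the following code blends a current-frame pixel estimate with a history estimate; the array shapes and the per-pixel variance estimates are assumed inputs from a renderer.

```python
import numpy as np

def variance_blend(curr, curr_var, hist, hist_var, eps=1e-8):
    """Blend current and history pixel estimates with inverse-variance weights.

    Generic illustration only: the per-pixel variance estimates are assumed
    to be provided by the renderer, and the weighting is plain inverse-variance
    averaging rather than the weighting derived in the paper.
    """
    # A pixel estimate with lower variance receives a higher weight;
    # the two weights sum to one per pixel.
    w_curr = hist_var / (curr_var + hist_var + eps)
    return w_curr * curr + (1.0 - w_curr) * hist

# Hypothetical usage with random data standing in for rendered frames.
rng = np.random.default_rng(0)
curr = rng.random((4, 4, 3))
hist = rng.random((4, 4, 3))
curr_var = np.full((4, 4, 3), 0.04)
hist_var = np.full((4, 4, 3), 0.01)
blended = variance_blend(curr, curr_var, hist, hist_var)
```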
Item: Progressive Denoising of Monte Carlo Rendered Images (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Firmino, Arthur; Frisvad, Jeppe Revall; Jensen, Henrik Wann; Chaine, Raphaëlle; Kim, Min H.
Image denoising based on deep learning has become a powerful tool to accelerate Monte Carlo rendering. Deep learning techniques can produce smooth images using a low sample count. Unfortunately, existing deep learning methods are biased and do not converge to the correct solution as the number of samples increases. In this paper, we propose a progressive denoising technique that aims to use denoising only when it is beneficial and to reduce its impact at high sample counts. We use Stein's unbiased risk estimate (SURE) to estimate the error in the denoised image, and we combine this with a neural network to infer a per-pixel mixing parameter. We further augment this network with confidence intervals based on classical statistics to ensure consistency and convergence of the final denoised image. Our results demonstrate that our method is consistent and that it improves existing denoising techniques. Furthermore, it can be used in combination with existing high-quality denoisers to ensure consistency. In addition to being asymptotically unbiased, progressive denoising is particularly good at preserving fine details that would otherwise be lost with existing denoisers.

Item: Rendering Glinty Granular Materials in Virtual Reality (The Eurographics Association, 2022)
Kajs, Nynne; Gjøl, Mikkel; Gath, Jakob; Philippi, Henrik; Frisvad, Jeppe Revall; Bærentzen, Jakob Andreas; Theophilus Teo; Ryota Kondo
Highly realistic rendering of grainy materials like sand is achievable given significant computational resources and ample time for rendering each frame. In an interactive virtual environment, we cannot afford such luxuries. Frame rates must be kept high, and precomputation should be kept at a level that does not limit the interactivity. We propose a system for editable procedural generation of sand appearance and demonstrate interactive virtual reality (VR) inspection of the generated sand under different skies. Our method enables stable real-time rendering of the glinty appearance that granular materials exhibit as a function of observer distance. This enables simultaneous nearby and distant inspection of the material.

Item: SparseBTF: Sparse Representation Learning for Bidirectional Texture Functions (The Eurographics Association, 2023)
Kavoosighafi, Behnaz; Frisvad, Jeppe Revall; Hajisharif, Saghi; Unger, Jonas; Miandji, Ehsan; Ritschel, Tobias; Weidlich, Andrea
We propose a novel dictionary-based representation learning model for Bidirectional Texture Functions (BTFs) aiming at compact storage, real-time rendering performance, and high image quality. Our model is trained once, using a small training set, and then used to obtain a sparse tensor containing the model parameters. Our technique exploits redundancies in the data across all dimensions simultaneously, as opposed to existing methods that use only angular information and ignore correlations in the spatial domain. We show that our model admits efficient angular interpolation directly in the model space, rather than the BTF space, leading to a notably higher rendering speed than in previous work. Additionally, the high quality-storage cost tradeoff enabled by our method facilitates controlling the image quality, storage cost, and rendering speed using a single parameter: the number of coefficients. Previous methods rely on a fixed number of latent variables for training and testing, hence limiting the potential for achieving a favorable quality-storage cost tradeoff and scalability. Our experimental results demonstrate that our method outperforms existing methods both quantitatively and qualitatively, while also achieving a higher compression ratio and rendering speed.
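To give a concrete sense of what a dictionary plus sparse coefficients looks like (a generic sparse-coding sketch under assumed dimensions, not the tensor model from the item above), the code below reconstructs a BTF sample from a few dictionary atoms and interpolates between two angular samples directly in coefficient space; the dictionary D, the indices, and the values are all hypothetical.

```python
import numpy as np

# Hypothetical learned dictionary: each column is one atom of dimension d.
d, n_atoms = 64, 256
rng = np.random.default_rng(1)
D = rng.standard_normal((d, n_atoms))

def reconstruct(indices, values):
    """Decode a BTF sample from its sparse coefficients.

    Only the nonzero coefficients (indices, values) need to be stored, which
    is what gives the compact representation. Generic sparse coding, not the
    exact factorization used by SparseBTF.
    """
    return D[:, indices] @ values

# Two hypothetical angular samples with four nonzero coefficients each;
# for simplicity they are assumed to use the same atoms.
idx = np.array([3, 17, 42, 100])
val_a = np.array([0.9, -0.2, 0.4, 0.1])
val_b = np.array([0.7, -0.1, 0.5, 0.2])

# Interpolating in model (coefficient) space: blend the coefficients first
# and decode once, instead of decoding both samples and blending full vectors.
t = 0.25
sample = reconstruct(idx, (1.0 - t) * val_a + t * val_b)
```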
Item: Survey of Models for Acquiring the Optical Properties of Translucent Materials (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Frisvad, Jeppe Revall; Jensen, Søren Alkærsig; Madsen, Jonas Skovlund; Correia, António; Yang, Li; Gregersen, Søren K. S.; Meuret, Youri; Hansen, Poul-Erik; Mantiuk, Rafal and Sundstedt, Veronica
The outset of realistic rendering is a desire to reproduce the appearance of the real world. Rendering techniques therefore operate at a scale corresponding to the size of objects that we observe with our naked eyes. At the same time, rendering techniques must be able to deal with objects of nearly arbitrary shapes and materials. These requirements lead to techniques that oftentimes leave the task of setting the optical properties of the materials to the user. Matching the appearance of real objects by manual adjustment of optical properties is, however, nearly impossible. We can render objects with a plausible appearance in this way but cannot compare the appearance of a manufactured item to that of its digital twin. This is especially true in the case of translucent objects, where we need more than a goniometric measurement of the optical properties. In this survey, we provide an overview of forward and inverse models for acquiring the optical properties of translucent materials. We map out the efforts in graphics research in this area and describe techniques available in related fields. Our objective is to provide a better understanding of the tools currently available for appearance specification when it comes to digital representations of real translucent objects.

Item: Tools for Virtual Reality Visualization of Highly Detailed Meshes (The Eurographics Association, 2021)
Jensen, Mark B.; Jacobsen, Egill I.; Frisvad, Jeppe Revall; Bærentzen, J. Andreas; Gillmann, Christina and Krone, Michael and Reina, Guido and Wischgoll, Thomas
The number of polygons in meshes acquired using 3D scanning or by computational methods for shape generation is rapidly increasing. With this growing complexity of geometric models, new visualization modalities need to be explored for more effortless and intuitive inspection and analysis. Virtual reality (VR) is a step in this direction but comes at the cost of a tighter performance budget. In this paper, we explore different starting points for achieving high performance when visualizing large meshes in virtual reality. We explore two rendering pipelines and mesh optimization algorithms and find that a mesh shading pipeline shows great promise when compared to a normal vertex shading pipeline. We also test the VR performance of commonly used visualization tools (ParaView and Unity) and of ray tracing running on the graphics processing unit (GPU). Finally, we find that mesh pre-processing is important to performance and that the specific type of pre-processing needed depends intricately on the choice of rendering pipeline.
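As a rough sketch of the kind of pre-processing a mesh shading pipeline depends on (a naive illustration, not the pre-processing evaluated in the paper), the following code greedily splits an indexed triangle list into meshlets with bounded vertex and triangle counts; the limits of 64 vertices and 124 triangles are common defaults and are assumed here, not taken from the paper.

```python
import numpy as np

MAX_VERTS, MAX_TRIS = 64, 124  # common meshlet limits; assumed, not from the paper

def build_meshlets(indices):
    """Greedily split an indexed triangle list (N x 3) into meshlets.

    Each meshlet stores its own local vertex list and locally indexed
    triangles so that a mesh shader work group can process it independently.
    Naive illustration only: no vertex-cache or locality optimization.
    """
    meshlets, verts, tris = [], {}, []
    for tri in indices:
        new = [v for v in tri if v not in verts]
        if len(verts) + len(new) > MAX_VERTS or len(tris) + 1 > MAX_TRIS:
            meshlets.append((list(verts), tris))
            verts, tris = {}, []
            new = list(tri)
        for v in new:
            verts[v] = len(verts)
        tris.append([verts[v] for v in tri])
    if tris:
        meshlets.append((list(verts), tris))
    return meshlets

# Hypothetical usage on a small strip-like index buffer.
faces = np.array([[i, i + 1, i + 2] for i in range(200)])
print(len(build_meshlets(faces)), "meshlets")
```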