Browsing by Author "Holzschuch, Nicolas"
Now showing 1 - 6 of 6
Item
Adaptive Matrix Completion for Fast Visibility Computations with Many Lights Rendering
(The Eurographics Association and John Wiley & Sons Ltd., 2020) Wang, Sunrise; Holzschuch, Nicolas; Dachsbacher, Carsten and Pharr, Matt
Several fast global illumination algorithms rely on the Virtual Point Lights framework. This framework separates illumination into two steps: first, propagate radiance in the scene and store it in virtual lights, then gather illumination from these virtual lights. To accelerate the second step, virtual lights and receiving points are grouped hierarchically, for example using Multi-Dimensional Lightcuts. Computing visibility between clusters of virtual lights and receiving points is a bottleneck. Separately, matrix completion algorithms completely reconstruct a low-rank matrix from an incomplete set of sampled elements. In this paper, we use adaptive matrix completion to approximate visibility information after an initial clustering step. We reconstruct visibility information using as few as 10% to 20% of the samples for most scenes, and combine it with shading information computed separately, in parallel on the GPU. Overall, our method computes global illumination 3 or more times faster than previous state-of-the-art methods.

Item
Fast Global Illumination with Discrete Stochastic Microfacets Using a Filterable Model
(The Eurographics Association and John Wiley & Sons Ltd., 2018) Wang, Beibei; Wang, Lu; Holzschuch, Nicolas; Fu, Hongbo and Ghosh, Abhijeet and Kopf, Johannes
Many real-life materials have a sparkling appearance, whether by design or by nature. Examples include metallic paints and sparkling varnish, but also snow. These sparkles correspond to small, isolated, shiny particles reflecting light in a specific direction, on the surface or embedded inside the material. The particles responsible for these sparkles are usually small and discontinuous. These characteristics make it difficult to integrate them efficiently in a standard rendering pipeline, especially for indirect illumination. Existing approaches use a 4-dimensional hierarchy, searching for light-reflecting particles simultaneously in space and direction. The approach is accurate, but still expensive. In this paper, we show that this 4-dimensional search can be approximated using separate 2-dimensional steps. This approximation allows fast integration of glint contributions for large footprints, reducing the extra cost associated with glints by an order of magnitude.

Item
Joint SVBRDF Recovery and Synthesis From a Single Image using an Unsupervised Generative Adversarial Network
(The Eurographics Association, 2020) Zhao, Yezi; Wang, Beibei; Xu, Yanning; Zeng, Zheng; Wang, Lu; Holzschuch, Nicolas; Dachsbacher, Carsten and Pharr, Matt
We want to recreate spatially-varying bi-directional reflectance distribution functions (SVBRDFs) from a single image. Producing these SVBRDFs from single images will allow designers to incorporate many new materials in their virtual scenes, increasing their realism. A single image contains incomplete information about the SVBRDF, making reconstruction difficult. Existing algorithms can produce high-quality SVBRDFs from one or a few input photographs using supervised deep learning. The learning step relies on a huge dataset with both input photographs and the ground-truth SVBRDF maps. This is a weakness, as ground-truth maps are not easy to acquire. For practical use, it is also important to produce large SVBRDF maps. Existing algorithms rely on a separate texture synthesis step to generate these large maps, which leads to a loss of consistency between the generated SVBRDF maps. In this paper, we address both issues simultaneously. We present an unsupervised generative adversarial neural network that handles both SVBRDF capture from a single image and synthesis at the same time. From a low-resolution input image, we generate a high-resolution SVBRDF, much larger than the input image. We train a generative adversarial network (GAN) to get SVBRDF maps, which have both a large spatial extent and detailed texels. We employ a two-stream generator that divides the training of maps into two groups (normal and roughness as one, diffuse and specular as the other) to better optimize those four maps. In the end, our method is able to generate high-quality, large-scale SVBRDF maps from a single input photograph with repetitive structures, and provides higher-quality rendering results with more details compared to previous work. Each input for our method requires individual training, which takes about 3 hours.
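To make the matrix-completion idea in the first item above concrete, here is a minimal Python sketch: a partially sampled visibility matrix is filled in by fitting a low-rank factorization to the sampled entries only, via alternating least squares. All names and parameters are ours, not the paper's implementation, and the adaptive sampling that drives the paper's method is left out (the sample set is fixed here).

import numpy as np

def complete_visibility(V_samples, mask, rank=8, n_iters=50, reg=1e-3):
    """Fill in a partially sampled visibility matrix by a low-rank
    factorization V ~ L @ R, fitted only to the sampled entries
    (alternating least squares). `mask` is True where an entry was
    actually computed; `V_samples` holds those sampled values."""
    m, n = V_samples.shape
    rng = np.random.default_rng(0)
    L = rng.standard_normal((m, rank))
    R = rng.standard_normal((rank, n))
    for _ in range(n_iters):
        # Solve for each row of L using only that row's sampled columns.
        for i in range(m):
            cols = mask[i]
            if cols.any():
                A = R[:, cols]                          # rank x k
                b = V_samples[i, cols]                  # k
                L[i] = np.linalg.solve(A @ A.T + reg * np.eye(rank), A @ b)
        # Symmetric update for each column of R.
        for j in range(n):
            rows = mask[:, j]
            if rows.any():
                A = L[rows]                             # k x rank
                b = V_samples[rows, j]
                R[:, j] = np.linalg.solve(A.T @ A + reg * np.eye(rank), A.T @ b)
    return np.clip(L @ R, 0.0, 1.0)                     # visibility lies in [0, 1]

In the paper the sampling is adaptive, placing more samples where the reconstruction is uncertain; that feedback loop is the part this sketch omits.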
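The separable approximation in the second item above ("Fast Global Illumination with Discrete Stochastic Microfacets Using a Filterable Model") can be caricatured as replacing one joint 4D space-direction query with the product of two independent 2D factors. The sketch below is purely illustrative; in particular, the directional placeholder is ours and is not the paper's microfacet model.

import numpy as np

def expected_glint_count(n_particles, footprint_area, surface_area,
                         query_solid_angle, roughness):
    """Treat the 4D space-direction query as two independent 2D queries:
    the fraction of particles inside the pixel footprint, times the
    fraction whose reflection direction falls inside the query cone."""
    p_spatial = min(1.0, footprint_area / surface_area)
    # Placeholder directional lobe: a cone of solid angle ~ pi * alpha^2
    # stands in for a real microfacet normal distribution.
    lobe = np.pi * roughness ** 2
    p_directional = min(1.0, query_solid_angle / lobe)
    return n_particles * p_spatial * p_directional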
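The two-stream generator described in the third item above splits the four SVBRDF maps into two groups, each produced by its own decoder. A minimal PyTorch sketch of that split follows; the layer sizes and channel layout are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class TwoStreamGenerator(nn.Module):
    """Two decoder streams over a shared latent code: one produces the
    normal + roughness maps, the other the diffuse + specular maps
    (3 + 1 + 3 + 3 = 10 output channels in total)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        def stream(out_channels):
            return nn.Sequential(
                nn.ConvTranspose2d(latent_dim, 32, 4, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(32, out_channels, 4, stride=2, padding=1),
                nn.Tanh(),  # maps in [-1, 1], rescaled to texel values later
            )
        self.normal_rough = stream(3 + 1)   # normal (3) + roughness (1)
        self.diffuse_spec = stream(3 + 3)   # diffuse (3) + specular (3)

    def forward(self, z):
        a = self.normal_rough(z)
        b = self.diffuse_spec(z)
        normal, roughness = a[:, :3], a[:, 3:]
        diffuse, specular = b[:, :3], b[:, 3:]
        return normal, roughness, diffuse, specular

Splitting the maps this way lets each stream specialize: normal and roughness jointly shape the highlights, while diffuse and specular jointly set the base color response.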
Item
Real-Time Glints Rendering With Pre-Filtered Discrete Stochastic Microfacets
(© 2020 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Wang, Beibei; Deng, Hong; Holzschuch, Nicolas; Benes, Bedrich and Hauser, Helwig
Many real-life materials have a sparkling appearance. Examples include metallic paints, sparkling fabrics and snow. Simulating these sparkles is important for realistic rendering but expensive. As sparkles come from small shiny particles reflecting light into a specific direction, they are very challenging for illumination simulation. Existing approaches use a four-dimensional hierarchy, searching for light-reflecting particles simultaneously in space and direction. The approach is accurate, but extremely expensive. A separable model is much faster, but still not suitable for real-time applications. The performance problem is even worse when illumination comes from environment maps, as they require either a large sample count per pixel or pre-filtering. Pre-filtering is incompatible with the existing sparkle models, due to their discrete multi-scale representation. In this paper, we present a GPU-friendly, pre-filtered model for real-time simulation of sparkles and glints. Our method simulates glints under both environment maps and point light sources in real time, with an added cost of just 10 ms per frame at full high-definition resolution. Editing material properties requires extra computation but remains real-time, also with an added cost of 10 ms per frame.

Item
Rendering Transparent Materials with a Complex Refractive Index: Semi-conductor and Conductor Thin Layers
(The Eurographics Association, 2019) Gerardin, Morgane; Holzschuch, Nicolas; Martinetto, Pauline; Klein, Reinhard and Rushmeier, Holly
During physical simulation of light transport, we separate materials into conductors and dielectrics. The former have a complex refractive index and are treated as opaque; the latter have a real one and are treated as transparent. However, thin layers with a complex refractive index can become transparent if their thickness is small compared to the extinction coefficient. This happens with thin metallic layers, but also with many pigments that are semiconductors: their extinction coefficient (the imaginary part of their refractive index) is close to zero for part of the visible spectrum. Spectral effects inside these thin layers (attenuation and interference) result in dramatic color changes.
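The attenuation half of the item above follows from Beer-Lambert absorption: a layer of thickness d whose refractive index has imaginary part kappa(lambda) attenuates light by T = exp(-alpha d) with alpha = 4 pi kappa / lambda, so wavelengths where kappa is near zero pass through and tint the transmitted light. A minimal sketch (ours, with made-up kappa values; interference between the layer's two interfaces is deliberately omitted):

import numpy as np

def layer_transmittance(kappa, thickness_nm, wavelengths_nm):
    """Beer-Lambert attenuation through a layer whose refractive index
    has imaginary part kappa(lambda): alpha = 4*pi*kappa / lambda,
    T = exp(-alpha * d). Thin-film interference is not modeled here."""
    kappa = np.asarray(kappa, dtype=float)
    wavelengths_nm = np.asarray(wavelengths_nm, dtype=float)
    alpha = 4.0 * np.pi * kappa / wavelengths_nm   # absorption per nm
    return np.exp(-alpha * thickness_nm)

# A semiconductor pigment: kappa near zero over part of the visible
# spectrum lets those wavelengths through, coloring the transmitted light.
wl = np.array([450.0, 550.0, 650.0])   # wavelengths in nm
k  = np.array([0.8,   0.05,  0.0])     # illustrative kappa values
print(layer_transmittance(k, thickness_nm=100.0, wavelengths_nm=wl))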
Item
SVBRDF Recovery from a Single Image with Highlights Using a Pre-trained Generative Adversarial Network
(© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Wen, Tao; Wang, Beibei; Zhang, Lei; Guo, Jie; Holzschuch, Nicolas; Hauser, Helwig and Alliez, Pierre
Spatially varying bi-directional reflectance distribution functions (SVBRDFs) are crucial for designers to incorporate new materials in virtual scenes, making them look more realistic. Reconstruction of SVBRDFs is a long-standing problem. Existing methods either rely on an extensive acquisition system or require huge datasets, which are non-trivial to acquire. We aim to recover SVBRDFs from a single image, without any datasets. A single image contains incomplete information about the SVBRDF, making the reconstruction task highly ill-posed. It is also difficult to separate the changes in colour caused by the material from those caused by the illumination, without prior knowledge learned from a dataset. In this paper, we use an unsupervised generative adversarial neural network (GAN) to recover SVBRDF maps with a single image as input. To better separate the effects due to illumination from the effects due to the material, we add the hypothesis that the material is stationary and introduce a new loss function based on Fourier coefficients to enforce this stationarity. For efficiency, we train the network in two stages: we reuse a trained model to initialize the SVBRDF maps, then fine-tune it based on the input image. Our method generates high-quality SVBRDF maps from a single input photograph, and provides more vivid rendering results compared to previous work. The two-stage training boosts runtime performance, making it eight times faster than the previous work.
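The abstract above only says the stationarity loss is "based on Fourier coefficients". One plausible reading, shown below as our assumption rather than the paper's actual loss, compares the Fourier amplitude spectra of two random crops of the generated maps: for a stationary material these spectra should agree regardless of where the crops are taken, while phase (feature position) is deliberately ignored.

import torch

def stationarity_loss(maps, crop=64):
    """Penalize non-stationarity: two random crops of a stationary map
    should have approximately the same Fourier amplitude spectrum.
    Taking the modulus of the FFT discards phase, i.e. feature position."""
    _, _, h, w = maps.shape
    ys = torch.randint(0, h - crop, (2,)).tolist()
    xs = torch.randint(0, w - crop, (2,)).tolist()
    a = maps[:, :, ys[0]:ys[0] + crop, xs[0]:xs[0] + crop]
    b = maps[:, :, ys[1]:ys[1] + crop, xs[1]:xs[1] + crop]
    fa = torch.fft.rfft2(a).abs()
    fb = torch.fft.rfft2(b).abs()
    return torch.mean((fa - fb) ** 2)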