Browsing by Author "Ritschel, Tobias"
Now showing 1 - 14 of 14
Item: Blue Noise Plots (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Onzenoodt, Christian van; Singh, Gurprit; Ropinski, Timo; Ritschel, Tobias
Editors: Mitra, Niloy; Viola, Ivan
We propose Blue Noise Plots, two-dimensional dot plots that depict data points of univariate data sets. While one-dimensional strip plots are often used to depict such data, one of their main problems is the visual clutter that results from overlap. To reduce this overlap, jitter plots were introduced, which add a non-encoding plot dimension along which the dots representing the data points are randomly perturbed. Unfortunately, this randomness can suggest non-existent clusters, and often leads to visually unappealing plots in which overlap might still occur. To overcome these shortcomings, we introduce Blue Noise Plots, where random jitter along the non-encoding plot dimension is replaced by optimizing all dots to keep a minimum distance in 2D, i.e., blue noise. We evaluate the effectiveness as well as the aesthetics of Blue Noise Plots through both a quantitative and a qualitative user study. The Python implementation of Blue Noise Plots is available here.

Item: Deep-learning the Latent Space of Light Transport (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Hermosilla, Pedro; Maisch, Sebastian; Ritschel, Tobias; Ropinski, Timo
Editors: Boubekeur, Tamy; Sen, Pradeep
We suggest a method to directly deep-learn light transport, i.e., the mapping from a 3D geometry-illumination-material configuration to a shaded 2D image. While many previous learning methods have employed 2D convolutional neural networks applied to images, we show for the first time that light transport can be learned directly in 3D. The benefit of 3D over 2D is that the former can also correctly capture illumination effects related to occluded and/or semi-transparent geometry.
To learn 3D light transport, we represent the 3D scene as an unstructured 3D point cloud, which is later, during rendering, projected to the 2D output image. Thus, we suggest a two-stage operator comprising a 3D network that first transforms the point cloud into a latent representation, which is then projected to the 2D output image by a dedicated 3D-to-2D network in a second step. We show that our approach results in improved quality in terms of temporal coherence while retaining most of the computational efficiency of common 2D methods. As a consequence, the proposed two-stage operator serves as a valuable extension to modern deferred shading approaches.

Item: Distortion-Free Displacement Mapping (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Zirr, Tobias; Ritschel, Tobias
Editors: Steinberger, Markus; Foley, Tim
Displacement mapping is routinely used to add geometric detail in a fast and easy-to-control way, both in offline rendering and, more recently, in interactive applications such as games. However, it went largely unnoticed (with the exception of McGuire and Whitson [MW08]) that, when displacement mapping is applied to a surface with a low-distortion parametrization, this parametrization is distorted because the geometry is changed by the displacement mapping. Typical resulting artifacts are ''rubber band''-like distortion patterns in areas of strong displacement change, where a small isotropic area in texture space is mapped to a large anisotropic area in world space. We describe a fast, fully GPU-based two-step procedure to resolve this problem. First, a correction deformation is computed from the displacement map. Second, we propose two variants to apply this correction when computing displacement mapping. The first variant is backward-compatible and can resolve the artifact in any rendering pipeline without modifying it and without requiring additional computation at render time, but only works for bijective parametrizations.
The second variant works for more general parametrizations, but requires modifying the rendering code and incurs a very small computational overhead.

Item: EUROGRAPHICS 2018: Tutorials Frontmatter (Eurographics Association, 2018)
Authors: Ritschel, Tobias; Telea, Alexandru
Editors: Ritschel, Tobias; Telea, Alexandru

Item: EUROGRAPHICS 2020: Posters Frontmatter (Eurographics Association, 2020)
Authors: Ritschel, Tobias; Eilertsen, Gabriel
Editors: Ritschel, Tobias; Eilertsen, Gabriel

Item: High Performance Graphics 2021 CGF 40-8: Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Binder, Nikolaus; Ritschel, Tobias
Editors: Binder, Nikolaus; Ritschel, Tobias

Item: High-Performance Graphics 2021 – Symposium Papers: Frontmatter (Eurographics Association, 2021)
Authors: Binder, Nikolaus; Ritschel, Tobias
Editors: Binder, Nikolaus; Ritschel, Tobias

Item: Learning to Learn and Sample BRDFs (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Liu, Chen; Fischer, Michael; Ritschel, Tobias
Editors: Myszkowski, Karol; Niessner, Matthias
We propose a method to accelerate the joint process of physically acquiring and learning neural Bi-directional Reflectance Distribution Function (BRDF) models. While BRDF learning alone can be accelerated by meta-learning, acquisition remains slow as it relies on a mechanical process. We show that meta-learning can be extended to also optimize the physical sampling pattern. After our method has been meta-trained on a set of fully sampled BRDFs, it is able to quickly train on new BRDFs with up to five orders of magnitude fewer physical acquisition samples at similar quality.
Our approach also extends to other linear and non-linear BRDF models, which we show in an extensive evaluation.

Item: Learning to Predict Image-based Rendering Artifacts with Respect to a Hidden Reference Image (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Bemana, Mojtaba; Keinert, Joachim; Myszkowski, Karol; Bätz, Michel; Ziegler, Matthias; Seidel, Hans-Peter; Ritschel, Tobias
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
Image metrics predict the perceived per-pixel difference between a reference image and its degraded (e.g., re-rendered) version. In several important applications the reference image is not available, and image metrics cannot be applied. We devise a neural network architecture and training procedure that allow predicting the MSE, SSIM, or VGG16 image difference from the distorted image alone, while the reference is not observed. This is enabled by two insights: The first is to inject sufficiently many undistorted natural image patches, which can be found in arbitrary amounts and are known to have no perceivable difference to themselves. This avoids false positives. The second is to balance the learning, where it is carefully ensured that all image errors are equally likely, avoiding false negatives. Surprisingly, we observe that the resulting no-reference metric can, subjectively, even perform better than the reference-based one, as it had to become robust against misalignments. We evaluate the effectiveness of our approach in an image-based rendering context, both quantitatively and qualitatively.
Finally, we demonstrate two applications that reduce light field capture time and provide guidance for interactive depth adjustment.

Item: Neural BRDF Representation and Importance Sampling (© 2021 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021)
Authors: Sztrajman, Alejandro; Rainer, Gilles; Ritschel, Tobias; Weyrich, Tim
Editors: Benes, Bedrich; Hauser, Helwig
Controlled capture of real-world material appearance yields tabulated sets of highly realistic reflectance data. In practice, however, its high memory footprint requires compression into a representation that can be used efficiently in rendering while remaining faithful to the original. Previous works in appearance encoding often prioritized one of these requirements at the expense of the other, by either applying high-fidelity array compression strategies not suited for efficient queries during rendering, or by fitting a compact analytic model that lacks expressiveness. We present a compact neural network-based representation of BRDF data that combines high-accuracy reconstruction with efficient practical rendering via built-in interpolation of reflectance. We encode BRDFs as lightweight networks and propose a training scheme with adaptive angular sampling, critical for the accurate reconstruction of specular highlights. Additionally, we propose a novel approach to make our representation amenable to importance sampling: rather than inverting the trained networks, we learn to encode them in a more compact embedding that can be mapped to the parameters of an analytic BRDF for which importance sampling is known.
We evaluate encoding results on isotropic and anisotropic BRDFs from multiple real-world datasets, and importance sampling performance for isotropic BRDFs mapped to two different analytic models.

Item: Neural Precomputed Radiance Transfer (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Rainer, Gilles; Bousseau, Adrien; Ritschel, Tobias; Drettakis, George
Editors: Chaine, Raphaëlle; Kim, Min H.
Recent advances in neural rendering indicate immense promise for architectures that learn light transport, allowing efficient rendering of global illumination effects once such methods are trained. The training phase of these methods can be seen as a form of precomputation, which has a long-standing history in computer graphics. In particular, Precomputed Radiance Transfer (PRT) achieves real-time rendering by freezing some variables of the scene (geometry, materials) and encoding the distribution of others, allowing interactive rendering at runtime. We adopt the same configuration as PRT (global illumination of static scenes under dynamic environment lighting) and investigate different neural network architectures, inspired by the design principles and theoretical analysis of PRT. We introduce four different architectures, and show that those based on knowledge of light transport models and PRT-inspired principles improve the quality of global illumination predictions at equal training time and network size, without the need for high-end ray-tracing hardware.

Item: OutCast: Outdoor Single-image Relighting with Cast Shadows (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Griffiths, David; Ritschel, Tobias; Philip, Julien
Editors: Chaine, Raphaëlle; Kim, Min H.
We propose a relighting method for outdoor images. Our method mainly focuses on predicting cast shadows in arbitrary novel lighting directions from a single image, while also accounting for shading and global effects such as the sun light color and clouds.
Previous solutions to this problem rely on reconstructing occluder geometry, e.g., using multi-view stereo, which requires many images of the scene. Instead, in this work we use a noisy off-the-shelf single-image depth map estimate as a source of geometry. While this can be a good guide for some lighting effects, the resulting depth map quality is insufficient for directly ray-tracing the shadows. Addressing this, we propose a learned image-space ray-marching layer that converts the approximate depth map into a deep 3D representation that is fused into occlusion queries using a learned traversal. Our proposed method achieves, for the first time, state-of-the-art relighting results with only a single image as input.

Item: Rendering 2023 CGF 42-4: Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Ritschel, Tobias; Weidlich, Andrea
Editors: Ritschel, Tobias; Weidlich, Andrea

Item: Rendering 2023 Symposium Track: Frontmatter (The Eurographics Association, 2023)
Authors: Ritschel, Tobias; Weidlich, Andrea
Editors: Ritschel, Tobias; Weidlich, Andrea
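As an aside, the OutCast entry above contrasts its learned ray-marching layer with the classical approach of marching shadow rays directly through a depth/height map. The classical baseline can be sketched in a few lines of NumPy; this is a minimal illustration of height-field shadow ray-marching, not the paper's method, and all names here (`height_field_shadows`, the test height field) are hypothetical:

```python
import numpy as np

def height_field_shadows(height, light_dir, step=1.0, n_steps=64):
    """Mark a pixel as shadowed if, marching from its height toward the
    light, the height field ever rises above the ray.
    light_dir = (dx, dy, dz) with dz > 0 pointing up."""
    h, w = height.shape
    dx, dy, dz = light_dir
    shadow = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            z = height[y, x]
            for i in range(1, n_steps):
                sx = x + dx * step * i  # sample position along the shadow ray
                sy = y + dy * step * i
                sz = z + dz * step * i
                if not (0 <= sx < w and 0 <= sy < h):
                    break  # ray left the height field: unoccluded
                if height[int(sy), int(sx)] > sz:
                    shadow[y, x] = True  # terrain blocks the ray
                    break
    return shadow

# A flat ground plane with a tall wall at column x = 4; light comes from +x.
hmap = np.zeros((8, 8))
hmap[:, 4] = 5.0
shadow = height_field_shadows(hmap, (1.0, 0.0, 0.5))
```

With a single noisy depth map this naive march produces exactly the artifacts the paper describes, which motivates replacing the hard occlusion test with a learned traversal over a deep 3D representation.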