EGWR: Eurographics Workshop on Rendering
Browsing EGWR: Eurographics Workshop on Rendering by Subject "based rendering"
Item: Deep Flow Rendering: View Synthesis via Layer-aware Reflection Flow (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Dai, Pinxuan; Xie, Ning
Editors: Ghosh, Abhijeet; Wei, Li-Yi
Novel view synthesis (NVS) generates images from unseen viewpoints based on a set of input images. It is challenging because of inaccurate lighting optimization and geometry inference. Although current neural rendering methods have made significant progress, they still struggle to reconstruct global illumination effects such as reflections, and exhibit ambiguous blurs in highly view-dependent areas. This work addresses high-quality view synthesis with an emphasis on reflections from non-concave surfaces. We propose Deep Flow Rendering, which optimizes direct and indirect lighting separately by leveraging texture mapping, appearance flow, and neural rendering. A learnable texture is used to predict view-independent features while enabling efficient reflection extraction. To accurately fit view-dependent effects, we adopt a constrained neural flow that transfers image-space features from nearby views to the target view in an edge-preserving manner. A fusing renderer then combines the predictions of both layers to form the output image. Experiments demonstrate that our method outperforms state-of-the-art methods at synthesizing various scenes with challenging reflection effects.

Item: Exploiting Repetitions for Image-Based Rendering of Facades (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Rodriguez, Simon; Bousseau, Adrien; Durand, Fredo; Drettakis, George
Editors: Jakob, Wenzel; Hachisuka, Toshiya
Street-level imagery is now abundant but does not have sufficient capture density to be usable for Image-Based Rendering (IBR) of facades. We present a method that exploits repetitive elements in facades, such as windows, to perform data augmentation, in turn improving camera calibration, reconstructed geometry, and overall rendering quality for IBR.
The main intuition behind our approach is that a few views of several instances of an element provide similar information to many views of a single instance of that element. We first select similar instances of an element from 3-4 views of a facade and transform them into a common coordinate system, creating a "platonic" element. We use this common space to refine the camera calibration of each view of each instance and to reconstruct a 3D mesh of the element with multi-view stereo, which we regularize to obtain a piecewise-planar mesh aligned with dominant image contours. Observing the same element under multiple views also allows us to identify reflective areas, such as glass panels, which we use at rendering time to generate plausible reflections using an environment map. Our detailed 3D mesh, augmented set of views, and reflection mask enable image-based rendering of much higher quality than results obtained using the input images directly.

Item: Interactive Control over Temporal Consistency while Stylizing Video Streams (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Shekhar, Sumit; Reimann, Max; Hilscher, Moritz; Semmo, Amir; Döllner, Jürgen; Trapp, Matthias
Editors: Ritschel, Tobias; Weidlich, Andrea
Image stylization has seen significant advancement and widespread interest over the years, leading to the development of a multitude of techniques. Extending these stylization techniques, such as Neural Style Transfer (NST), to videos is often achieved by applying them on a per-frame basis. However, per-frame stylization usually lacks temporal consistency, which manifests as undesirable flickering artifacts.
Most existing approaches for enforcing temporal consistency suffer from one or more of the following drawbacks: they (1) are only suitable for a limited range of techniques, (2) do not support online processing because they require the complete video as input, (3) cannot provide consistency for the task of stylization, or (4) do not provide interactive consistency control. Domain-agnostic techniques for temporal consistency aim to eradicate flickering completely but typically disregard aesthetic aspects. For stylization tasks, however, consistency control is an essential requirement, as a certain amount of flickering adds to the artistic look and feel. Moreover, making this control interactive is paramount from a usability perspective. To meet these requirements, we propose an approach that stylizes video streams in real time at full-HD resolution while providing interactive consistency control. We develop a lite optical-flow network that operates at 80 frames per second (FPS) on desktop systems with sufficient accuracy. Further, we employ an adaptive combination of local and global consistency features and enable interactive selection between them. Objective and subjective evaluations demonstrate that our method is superior to state-of-the-art video consistency approaches. Project page: maxreimann.github.io/stream-consistency

Item: NEnv: Neural Environment Maps for Global Illumination (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Rodriguez-Pardo, Carlos; Fabre, Javier; Garces, Elena; Lopez-Moreno, Jorge
Editors: Ritschel, Tobias; Weidlich, Andrea
Environment maps are commonly used to represent and compute far-field illumination in virtual scenes. However, they are expensive to evaluate and sample from, limiting their applicability to real-time rendering. Previous methods have focused on compression through spherical-domain approximations, or on learning priors for natural, daylight illumination.
These approaches hinder both accuracy and generality, and do not provide the probability information required for importance-sampled Monte Carlo integration. We propose NEnv, a fully differentiable deep-learning method capable of compressing a single environment map and learning to sample from it. NEnv is composed of two neural networks: a normalizing flow, which maps samples from a uniform distribution to the probability density of the illumination and also provides their corresponding probabilities; and an implicit neural representation, which compresses the environment map into an efficient differentiable function. Computing environment samples with NEnv is two orders of magnitude faster than with traditional methods. NEnv makes no assumptions about the content (e.g., natural illumination), thus achieving higher generality than previous learning-based approaches. We share our implementation and a diverse dataset of trained neural environment maps, which can be easily integrated into existing rendering engines.

Item: PVP: Personalized Video Prior for Editable Dynamic Portraits using StyleGAN (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Lin, Kai-En; Trevithick, Alex; Cheng, Keli; Sarkis, Michel; Ghafoorian, Mohsen; Bi, Ning; Reitmayr, Gerhard; Ramamoorthi, Ravi
Editors: Ritschel, Tobias; Weidlich, Andrea
Portrait synthesis creates realistic digital avatars that enable users to interact with others in a compelling way. Recent advances in StyleGAN and its extensions have shown promising results in synthesizing photorealistic and accurate reconstructions of human faces. However, previous methods often focus on frontal face synthesis, and most cannot handle large head rotations due to the training data distribution of StyleGAN. In this work, our goal is to take as input a monocular video of a face and create an editable dynamic portrait able to handle extreme head poses.
The user can create novel viewpoints, edit the appearance, and animate the face. Our method uses pivotal tuning inversion (PTI) to learn a personalized video prior from a monocular video sequence. We can then feed pose and expression coefficients to MLPs and manipulate the latent vectors to synthesize different viewpoints and expressions of the subject. We also propose novel loss functions to further disentangle pose and expression in the latent space. Our algorithm shows much better performance than previous approaches on monocular video datasets, and it is capable of running in real time at 54 FPS on an RTX 3080.

Item: Tessellated Shading Streaming (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Hladky, Jozef; Seidel, Hans-Peter; Steinberger, Markus
Editors: Boubekeur, Tamy; Sen, Pradeep
Presenting high-fidelity 3D content on compact portable devices with low computational power is challenging. Smartphones, tablets, and head-mounted displays (HMDs) suffer from thermal and battery-life constraints and thus cannot match the render quality of desktop PCs and laptops. Streaming rendering makes it possible to show high-quality content but can suffer from potentially high latency. We propose an approach that efficiently captures shading samples in object space and packs them into a texture. Streaming this texture to the client, we support temporal frame up-sampling with high fidelity, low latency, and high mobility. We introduce two novel sample-distribution strategies and a novel triangle representation in the shading-atlas space. Since such a system requires dynamic parallelism, we propose an implementation that exploits the power of hardware-accelerated tessellation stages. Our approach allows fast decoding and rendering of extrapolated views on a client device by using hardware-accelerated interpolation between shading samples and a set of potentially visible geometry.
A comparison to existing shading methods shows that our sample distributions allow better client-side shading quality than previous atlas-streaming approaches and outperform image-based methods in all relevant aspects.
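The client-side reconstruction described in the last abstract relies on interpolating between streamed shading samples over potentially visible triangles. As a rough illustration of that idea only (a CPU sketch under our own assumptions, not the paper's implementation), the snippet below interpolates three per-vertex shading samples across a triangle using barycentric weights; on real hardware this interpolation is performed by the GPU rasterizer, and the function names and data layout here are illustrative.

```python
# CPU sketch: reconstruct a pixel color by barycentric interpolation of the
# three shading samples attached to a triangle's vertices (e.g., RGB values
# captured in a shading atlas). Illustrative only.
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

def shade_pixel(p, tri_xy, tri_samples):
    """Interpolate three per-vertex shading samples at screen position p."""
    bary = barycentric(np.asarray(p, float), *[np.asarray(q, float) for q in tri_xy])
    return bary @ np.asarray(tri_samples, float)  # weighted sum of the samples

# At the triangle's centroid, all three samples contribute equally.
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
samples = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
color = shade_pixel([1 / 3, 1 / 3], tri, samples)
```

Extrapolating a new view then amounts to rasterizing the potentially visible geometry and running this interpolation per pixel, so the client never re-shades anything itself.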