Rendering - Experimental Ideas & Implementations 2018
Browsing Rendering - Experimental Ideas & Implementations 2018 by Subject "Image processing"
Now showing 1 - 2 of 2
Item
Deep Hybrid Real and Synthetic Training for Intrinsic Decomposition (The Eurographics Association, 2018)
Authors: Bi, Sai; Kalantari, Nima Khademi; Ramamoorthi, Ravi
Editors: Jakob, Wenzel and Hachisuka, Toshiya
Abstract: Intrinsic image decomposition is the process of separating the reflectance and shading layers of an image, a challenging and underdetermined problem. In this paper, we propose to address this problem systematically using a deep convolutional neural network (CNN). Although deep learning (DL) has recently been applied to this problem, current DL methods train the network only on synthetic images, since obtaining ground-truth reflectance and shading for real images is difficult. As a result, these methods fail to produce reasonable results on real images and often perform worse than non-DL techniques. We overcome this limitation with a novel hybrid approach that trains our network on both synthetic and real images. Specifically, in addition to directly supervising the network with synthetic images, we train it by requiring it to produce the same reflectance for a pair of images of the same real-world scene under different illuminations. Furthermore, we improve the results by incorporating a bilateral solver layer into our system during both the training and test stages. Experimental results show that our approach produces better results than state-of-the-art DL and non-DL methods on various synthetic and real datasets, both visually and numerically.

Item
Screen Space Approximate Gaussian Hulls (The Eurographics Association, 2018)
Authors: Meder, Julian; Brüderlin, Beat
Editors: Jakob, Wenzel and Hachisuka, Toshiya
Abstract: The Screen Space Approximate Gaussian Hull method presented in this paper is based on an output-sensitive, adaptive approach that addresses the challenge of high-quality rendering even for high-resolution displays and large numbers of light sources or indirect lighting. Our approach sparsely and dynamically samples the light information on a low-resolution mesh approximated from screen space and applies these samples in a deferred shading stage to the full-resolution image. This preserves geometric detail, unlike common approaches that combine lower-resolution rendering with upsampling strategies. The light samples are expressed as spherical Gaussian distribution functions, for which we found a more precise closed-form integration than existing approaches. Thus, our method does not exhibit the quality degradation shown by previously proposed approaches, and we show that the implementation is very efficient. Moreover, being an output-sensitive approach, it can be used for massive scene rendering without additional cost.
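To illustrate the hybrid training idea described in the first item, the sketch below shows how a supervised loss on synthetic images might be combined with a reflectance-consistency loss on real image pairs of the same scene under different illumination. This is a minimal illustration under stated assumptions, not the authors' published code: the network interface, tensor layout, loss choice (L1), and weighting are hypothetical, and the bilateral solver layer mentioned in the abstract is omitted.

import torch
import torch.nn.functional as F

def supervised_loss(pred_reflectance, pred_shading, gt_reflectance, gt_shading):
    # Direct supervision, only available for synthetic images with ground truth.
    return F.l1_loss(pred_reflectance, gt_reflectance) + F.l1_loss(pred_shading, gt_shading)

def consistency_loss(reflectance_a, reflectance_b):
    # Real image pairs of the same scene under different illumination
    # should yield the same reflectance layer.
    return F.l1_loss(reflectance_a, reflectance_b)

def training_step(net, synth_batch, real_pair_batch, w_consistency=1.0):
    # Hypothetical net(image) -> (reflectance, shading); shapes are assumptions.
    # Synthetic branch: supervised with ground-truth decomposition.
    img, gt_r, gt_s = synth_batch
    pred_r, pred_s = net(img)
    loss = supervised_loss(pred_r, pred_s, gt_r, gt_s)

    # Real branch: enforce identical reflectance across the two illuminations.
    img_a, img_b = real_pair_batch
    r_a, _ = net(img_a)
    r_b, _ = net(img_b)
    return loss + w_consistency * consistency_loss(r_a, r_b)

In practice the two branches would be drawn from separate synthetic and real data loaders and the returned loss backpropagated as usual; the relative weight w_consistency is an assumed hyperparameter.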
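For background on the second item, the snippet below evaluates the standard closed-form integral of a spherical Gaussian exp(lambda * (dot(mu, v) - 1)) over the unit sphere and checks it against Monte Carlo integration. This is generic context only, not the paper's improved integration; the function names and test values are assumptions.

import numpy as np

def sg_integral_closed_form(lam):
    # Standard result: integral over the sphere equals 2*pi*(1 - exp(-2*lam))/lam.
    return 2.0 * np.pi * (1.0 - np.exp(-2.0 * lam)) / lam

def sg_integral_monte_carlo(lam, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    # Uniformly sample directions on the unit sphere.
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    mu = np.array([0.0, 0.0, 1.0])          # lobe axis (arbitrary by symmetry)
    values = np.exp(lam * (v @ mu - 1.0))
    return values.mean() * 4.0 * np.pi      # sphere area times mean value

if __name__ == "__main__":
    lam = 8.0
    print(sg_integral_closed_form(lam))     # analytic value, about 0.785
    print(sg_integral_monte_carlo(lam))     # should agree within Monte Carlo noise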