Browsing by Author "Hilton, Adrian"
Now showing 1 - 2 of 2
Item
Deep Neural Models for Illumination Estimation and Relighting: A Survey (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021)
Einabadi, Farshad; Guillemaut, Jean‐Yves; Hilton, Adrian; Benes, Bedrich and Hauser, Helwig

Scene relighting and estimating illumination of a real scene for insertion of virtual objects in a mixed‐reality scenario are well‐studied challenges in the computer vision and graphics fields. Classical inverse rendering approaches aim to decompose a scene into its orthogonal constituent elements, namely scene geometry, illumination and surface materials, which can later be used for augmented reality or to render new images under novel lighting or viewpoints. Recently, the application of deep neural computing to illumination estimation, relighting and inverse rendering has shown promising results. This contribution aims to bring together, in a coherent manner, current advances at this intersection. We examine in detail the attributes of the proposed approaches, presented in three categories: scene illumination estimation, relighting with reflectance‐aware scene‐specific representations, and relighting as image‐to‐image transformation. Each category is concluded with a discussion of the main characteristics of current methods and possible future trends. We also provide an overview of current publicly available datasets for neural lighting applications.

Item
Learning Projective Shadow Textures for Neural Rendering of Human Cast Shadows from Silhouettes (The Eurographics Association, 2023)
Einabadi, Farshad; Guillemaut, Jean-Yves; Hilton, Adrian; Ritschel, Tobias; Weidlich, Andrea

This contribution introduces a novel two-step neural rendering framework that learns the transformation from a 2D human silhouette mask to the corresponding cast shadows on background scene geometries. In the first step, the proposed neural renderer learns a binary shadow texture (canonical shadow) from the 2D foreground subject, for each point light source, independent of the background scene geometry. Next, the generated binary shadows are texture-mapped onto transparent virtual shadow-map planes, which are used seamlessly in a traditional rendering pipeline to project hard or soft shadows for arbitrary scenes and light sources of different sizes. The neural renderer is trained with shadow images rendered from a fast, scalable, synthetic data generation framework. We introduce the 3D Virtual Human Shadow (3DVHshadow) dataset as a public benchmark for training and evaluating human shadow generation. Evaluation on the 3DVHshadow test set and on real 2D silhouette images of people demonstrates that the proposed framework achieves performance comparable to traditional geometry-based renderers, without requiring knowledge of, or computationally intensive explicit estimation of, the 3D human shape. We also show the benefit of learning intermediate canonical shadow textures over generating shadows directly in camera image space. Further experiments evaluate the effect of multiple light sources in the scene, model performance with respect to the relative camera-light 2D angular distance, potential aliasing artefacts related to output image resolution, and the effect of light source size on shadow softness.
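To make the inverse-rendering decomposition described in the first item above concrete, the sketch below shows diffuse relighting once a scene has already been decomposed into per-pixel albedo and normals: swapping the estimated lighting coefficients re-renders the scene under novel illumination. The second-order spherical-harmonics parameterisation, coefficient values and image sizes are illustrative assumptions, not taken from any surveyed method.

```python
# Minimal sketch of Lambertian relighting given an inverse-rendering-style
# decomposition into per-pixel albedo and normals (the decomposition itself
# is what the surveyed neural methods learn to estimate).
import numpy as np

def sh_basis(normals):
    """Second-order spherical-harmonics basis (9 terms, unnormalised) per pixel.

    normals: (H, W, 3) unit surface normals.
    Returns: (H, W, 9) basis values.
    """
    x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
    return np.stack([
        np.ones_like(x),        # l = 0
        y, z, x,                # l = 1
        x * y, y * z,           # l = 2
        3.0 * z**2 - 1.0,
        x * z, x**2 - y**2,
    ], axis=-1)

def relight(albedo, normals, sh_coeffs):
    """Render diffuse shading under 9-term SH lighting, modulated by albedo.

    albedo:    (H, W, 3) reflectance in [0, 1]
    normals:   (H, W, 3) unit normals
    sh_coeffs: (9,) lighting coefficients (single channel for brevity)
    """
    shading = sh_basis(normals) @ sh_coeffs          # (H, W)
    return albedo * np.clip(shading, 0.0, None)[..., None]

# Toy usage: a flat patch lit from above, then relit from the side.
normals = np.zeros((4, 4, 3)); normals[..., 2] = 1.0  # all facing +z
albedo = np.full((4, 4, 3), 0.8)
top_light  = np.array([0.8, 0.0, 0.6, 0.0, 0, 0, 0.1, 0, 0])
side_light = np.array([0.8, 0.0, 0.0, 0.6, 0, 0, 0.0, 0, 0])
print(relight(albedo, normals, top_light)[0, 0])
print(relight(albedo, normals, side_light)[0, 0])
```

Relighting here is just a change of the nine coefficients; estimating those coefficients (or richer lighting representations) from a single image is the harder problem the survey covers.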
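The pipeline in the second item hinges on its geometric step: projecting a learned binary shadow texture from a transparent virtual plane onto the scene from a point light. The sketch below illustrates that projection for a flat ground plane; the plane placement, resolutions and function names are assumptions made for illustration, not the paper's actual implementation.

```python
# Minimal sketch of projecting a binary canonical shadow texture, mounted on
# a virtual plane in world space, onto the ground plane z = 0 from a point
# light, in the spirit of the paper's second (traditional rendering) step.
import numpy as np

def project_shadow_plane(texture, plane_origin, plane_u, plane_v, light_pos,
                         ground_res=64, ground_extent=4.0):
    """Project opaque texels of a binary shadow texture onto the ground plane.

    texture:      (H, W) binary mask; 1 marks a shadow-casting texel
    plane_origin: (3,) world position of the texture's (0, 0) corner
    plane_u/v:    (3,) world-space edge vectors spanning the texture plane
    light_pos:    (3,) point-light position
    Returns a (ground_res, ground_res) binary shadow mask on z = 0.
    """
    h, w = texture.shape
    shadow = np.zeros((ground_res, ground_res), dtype=np.uint8)
    ys, xs = np.nonzero(texture)
    # World position of each opaque texel centre on the virtual plane.
    pts = (plane_origin
           + ((xs + 0.5) / w)[:, None] * plane_u
           + ((ys + 0.5) / h)[:, None] * plane_v)
    # Ray from the light through each texel, intersected with z = 0.
    d = pts - light_pos
    valid = d[:, 2] < 0                      # only rays heading down
    t = -light_pos[2] / d[valid, 2]
    hits = light_pos + t[:, None] * d[valid]
    # Rasterise the hit points into the ground shadow mask.
    ij = (hits[:, :2] + ground_extent / 2) / ground_extent * ground_res
    ij = np.floor(ij).astype(int)
    inside = ((ij >= 0) & (ij < ground_res)).all(axis=1)
    shadow[ij[inside, 1], ij[inside, 0]] = 1
    return shadow

# Toy usage: a square "silhouette" on an upright plane, lit from behind/above.
tex = np.zeros((32, 32), dtype=np.uint8); tex[8:24, 8:24] = 1
mask = project_shadow_plane(
    texture=tex,
    plane_origin=np.array([-0.5, 0.0, 0.2]),  # plane standing near the ground
    plane_u=np.array([1.0, 0.0, 0.0]),        # spans x
    plane_v=np.array([0.0, 0.0, 1.0]),        # spans z (up)
    light_pos=np.array([0.0, -2.0, 3.0]))
print(mask.sum(), "shadowed ground cells")
```

In the paper this projection is handled by a standard renderer with texture-mapped transparent planes; the point-wise ray intersection above is just the simplest self-contained way to show the same hard-shadow geometry.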