Rendering 2024 - Symposium Track
Item: High Quality Neural Relighting using Practical Zonal Illumination (The Eurographics Association, 2024)
Lin, Arvin; Lin, Yiming; Li, Xiaohui; Ghosh, Abhijeet; Haines, Eric; Garces, Elena

We present a method for high-quality image-based relighting using a practical, limited zonal illumination field. Our setup can be implemented with commodity components and no dedicated hardware. We employ a set of desktop monitors to illuminate a subject from a near-hemispherical zone and record One-Light-At-A-Time (OLAT) images from multiple viewpoints. We further extrapolate the sampling of incident illumination directions beyond the frontal coverage of the monitors by repeating OLAT captures with the subject rotated relative to the capture setup. Finally, we train our proposed skip-assisted autoencoder and latent-diffusion-based generative method to learn a high-quality continuous representation of the reflectance function without requiring explicit alignment of the data captured from the various viewpoints. This method enables smooth lighting animation for high-frequency reflectance functions and effectively extends incident lighting beyond the illumination zone of the practical capture setup. Compared to state-of-the-art methods, our approach achieves superior image-based relighting results, capturing finer skin-pore detail and extending to passive performance video relighting.

Item: Learning Self-Shadowing for Clothed Human Bodies (The Eurographics Association, 2024)
Einabadi, Farshad; Guillemaut, Jean-Yves; Hilton, Adrian; Haines, Eric; Garces, Elena

This paper proposes to learn self-shadowing on full-body, clothed human postures from monocular colour image input by supervising a deep neural model. The proposed approach implicitly learns the articulated body shape in order to generate self-shadow maps, without seeking to reconstruct explicitly or estimate parametric 3D body geometry. Furthermore, it generalises to different people without per-subject pre-training and has fast inference times. The proposed neural model is trained on self-shadow maps rendered from 3D scans of real people under various light directions. Inference of shadow maps for a given illumination is performed from 2D image input alone. Quantitative and qualitative experiments demonstrate results comparable to the state of the art whilst being monocular and achieving considerably faster inference. We provide ablations of our methodology and further show how the inferred self-shadow maps can benefit monocular full-body human relighting.
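As background to the OLAT capture described in the first item: classic image-based relighting combines the captured OLAT images linearly, weighting each by the target illumination's intensity along the corresponding light direction. The sketch below illustrates only this general principle with a hypothetical `relight` helper; it is not the papers' neural pipeline, which replaces this linear combination with a learned continuous reflectance representation.

```python
import numpy as np

def relight(olat_images, light_weights):
    """Hypothetical illustration of linear image-based relighting.

    olat_images:   (N, H, W, 3) array, one image per captured light direction.
    light_weights: length-N intensities of the target illumination sampled
                   along those same directions.
    Returns the relit (H, W, 3) image as the weighted sum of OLAT captures.
    """
    weights = np.asarray(light_weights, dtype=np.float64)
    # Contract over the light-direction axis: sum_i w_i * I_i
    return np.tensordot(weights, olat_images, axes=1)

# Toy example: two constant 2x2 OLAT "images" from two light directions.
olats = np.stack([np.full((2, 2, 3), 0.2), np.full((2, 2, 3), 0.5)])
relit = relight(olats, [1.0, 2.0])  # every pixel: 0.2*1.0 + 0.5*2.0 = 1.2
```

Because the combination is linear, any target lighting environment sampled at the captured directions can be reproduced from the same OLAT set; the learned methods above aim to interpolate and extrapolate beyond those sampled directions.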