Browsing by Author "Zheng, Jianmin"
Now showing 1 - 4 of 4
Item: Half‐body Portrait Relighting with Overcomplete Lighting Representation (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021)
Song, Guoxian; Cham, Tat‐Jen; Cai, Jianfei; Zheng, Jianmin; Benes, Bedrich and Hauser, Helwig
We present a neural‐based model for relighting a half‐body portrait image by simply referring to another portrait image with the desired lighting condition. Rather than following the classical inverse rendering methodology of estimating normals, albedo and environment maps, we implicitly encode the subject and lighting in a latent space, and use these latent codes to generate relighted images by neural rendering. A key technical innovation is a novel overcomplete lighting representation, which facilitates lighting interpolation in the latent space and helps regularize the self‐organization of the lighting latent space during training. In addition, we propose a novel multiplicative neural renderer that more effectively combines the subject and lighting latent codes for rendering. We also created a large‐scale photorealistic rendered relighting dataset for training, which allows our model to generalize well to real images. Extensive experiments demonstrate that our system not only outperforms existing methods for referral‐based portrait relighting, but can also generate sequences of relighted images via lighting rotations.

Item: Shading‐Based Surface Recovery Using Subdivision‐Based Representation (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Deng, Teng; Zheng, Jianmin; Cai, Jianfei; Cham, Tat‐Jen; Chen, Min and Benes, Bedrich
This paper presents subdivision‐based representations for both lighting and geometry in shape‐from‐shading.
A recent shading‐based method introduced a per‐vertex overall illumination model for surface reconstruction, which has the advantage of conveniently handling complicated lighting conditions and avoiding explicit estimation of visibility and varying albedo. However, due to its discrete nature, the per‐vertex overall illumination requires a large amount of memory and lacks intrinsic coherence. To overcome these problems, we propose to use classic subdivision to define the basic smooth lighting function and surface, and introduce additional independent variables into the subdivision to adaptively model sharp changes of illumination and geometry. Compared to previous work, the new model not only preserves the merits of the per‐vertex illumination model, but also greatly reduces the number of variables required in surface recovery and intrinsically regularizes the illumination vectors and the surface. These features make the new model well suited to multi‐view stereo surface reconstruction under general, unknown illumination conditions. In particular, we develop a variational surface reconstruction method built upon the subdivision representations for lighting and geometry. Experiments on both synthetic and real‐world data sets demonstrate that the proposed method is memory efficient and improves surface detail recovery.

Item: Unsupervised Dense Light Field Reconstruction with Occlusion Awareness (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Ni, Lixia; Jiang, Haiyong; Cai, Jianfei; Zheng, Jianmin; Li, Haifeng; Liu, Xu; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
Light field (LF) reconstruction is a fundamental technique in light field imaging and has applications in both software and hardware. This paper presents an unsupervised learning method for LF-oriented view synthesis, which provides a simple solution for generating quality light fields from a sparse set of views. The method is built on disparity estimation and image warping. Specifically, we first use per-view disparity as a geometry proxy to warp input views to novel views. Then we compensate for occlusion with a network via a forward-backward warping process. Cycle-consistency between different views is explored to enable unsupervised learning and accurate synthesis.
The method overcomes the drawbacks of fully supervised learning methods, which require large labeled training datasets, and of epipolar-plane-image-based interpolation methods, which do not make full use of geometry consistency in LFs. Experimental results demonstrate that the proposed method generates high-quality views for LF reconstruction, outperforming unsupervised approaches and performing comparably to fully supervised approaches.

Item: Visual Analysis of the Impact of Neural Network Hyper-Parameters (The Eurographics Association, 2020)
Jönsson, Daniel; Eilertsen, Gabriel; Shi, Hezi; Zheng, Jianmin; Ynnerman, Anders; Unger, Jonas; Archambault, Daniel and Nabney, Ian and Peltonen, Jaakko
We present an analysis of the impact of hyper-parameters for an ensemble of neural networks, using tailored visualization techniques to understand the complicated relationship between hyper-parameters and model performance. The high-dimensional error surface spanned by the wide range of hyper-parameters used to specify and optimize neural networks is difficult to characterize: it is non-convex and discontinuous, and there can be complex local dependencies between hyper-parameters. To explore these dependencies, we make use of a large number of sampled relations between hyper-parameters and end performance, retrieved from thousands of individually trained convolutional neural network classifiers. We use a structured selection of visualization techniques to analyze the impact of different combinations of hyper-parameters. The results reveal how complicated dependencies between hyper-parameters influence end performance, demonstrating how the complete picture painted by considering a large number of trainings simultaneously can aid in understanding the impact of hyper-parameter combinations.
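The disparity-driven view warping at the core of the light field reconstruction item above can be illustrated with a minimal sketch. This is not the authors' implementation: the simple horizontal-shift camera model, the function name `warp_view`, and the validity mask are illustrative assumptions used only to show how per-pixel disparity acts as a geometry proxy when synthesizing a novel view from a source view.

```python
import numpy as np

def warp_view(img, disparity, baseline):
    """Warp a grayscale view to a novel view along the horizontal axis,
    using per-pixel disparity as a geometry proxy (backward warping).

    For each target pixel x, the source column is x + baseline * d(x).
    Pixels whose source falls outside the image are marked invalid;
    in the paper's pipeline such holes would be filled by the
    occlusion-compensation network."""
    h, w = img.shape
    xs = np.arange(w)
    out = np.zeros_like(img)
    valid = np.zeros((h, w), dtype=bool)
    for y in range(h):
        # Source column for every target pixel in this row.
        src = np.round(xs + baseline * disparity[y]).astype(int)
        ok = (src >= 0) & (src < w)  # in-bounds sources only
        out[y, ok] = img[y, src[ok]]
        valid[y] = ok
    return out, valid
```

A forward-backward consistency check in the spirit of the abstract would warp the novel view back to the input with the negated baseline and flag pixels where the round trip disagrees as occluded.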