Browsing by Author "Fu, Yanping"
Item: Depth-Aware Shadow Removal
(The Eurographics Association and John Wiley & Sons Ltd., 2022) Fu, Yanping; Gai, Zhenyu; Zhao, Haifeng; Zhang, Shaojie; Shan, Ying; Wu, Yang; Tang, Jin; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne

Shadow removal from a single image is an ill-posed problem because shadow formation is affected by the complex interactions of geometry, albedo, and illumination. Most recent deep learning-based methods try to directly estimate the mapping between shadow and shadow-free image pairs to predict the shadow-free image. However, they are less effective on images with complex shadows or cluttered backgrounds. In this paper, we propose a novel end-to-end depth-aware shadow removal method that does not require depth images; instead, it estimates depth information from the RGB image and leverages the depth feature as guidance to enhance shadow removal and refinement. The proposed framework consists of three components: depth prediction, shadow removal, and boundary refinement. First, the depth prediction module predicts the depth map corresponding to the input shadow image. Then, we propose a new generative adversarial network (GAN) method that integrates the depth information to remove shadows in the RGB image. Finally, we propose an effective boundary refinement framework that uses depth cues to alleviate artifacts around shadow boundaries after removal. We conduct experiments on several public datasets and real-world shadow images. The experimental results demonstrate the effectiveness of the proposed method and its superior performance against state-of-the-art methods. (An illustrative sketch of the depth-guidance step appears after this listing.)

Item: Pyramid Multi-View Stereo with Local Consistency
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Liao, Jie; Fu, Yanping; Yan, Qingan; Xiao, Chunxia; Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon

In this paper, we propose a PatchMatch-based Multi-View Stereo (MVS) algorithm which can efficiently estimate geometry in textureless areas. Conventional PatchMatch-based MVS algorithms estimate depth and normal hypotheses mainly by optimizing photometric consistency metrics between a patch in the reference image and its projections in the other images. Photometric consistency works well in textured regions but cannot discriminate among hypotheses in textureless regions, which makes geometry estimation there difficult. To address this issue, we introduce local consistency. Based on the assumption that neighboring pixels with similar colors likely belong to the same surface and share approximately equal depth-normal values, local consistency guides depth and normal estimation with geometry propagated from such neighbors. To speed up the convergence of pixelwise local consistency across the image, we further introduce a pyramid architecture, similar to previous work, which also provides coarse estimates at the upper levels. We validate the effectiveness of our method on the ETH3D benchmark and the Tanks and Temples benchmark. Results show that our method outperforms the state-of-the-art. (An illustrative sketch of the local-consistency term appears after this listing.)
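The Depth-Aware Shadow Removal abstract describes a three-stage pipeline in which a depth map, predicted from the shadow image itself, guides the removal network. The sketch below is not the authors' code; it is a minimal PyTorch illustration of how such depth guidance could be wired in. The module names (DepthNet, RemovalNet), layer sizes, and the channel-concatenation scheme are all assumptions made for illustration.

# Hypothetical sketch (not the authors' released code): injecting an
# estimated depth map as guidance into a shadow-removal generator.
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    """Stand-in depth predictor: RGB (3 channels) -> depth map (1 channel)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, rgb):
        return self.net(rgb)

class RemovalNet(nn.Module):
    """Stand-in generator: shadow RGB + depth guidance -> shadow-free RGB."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )
    def forward(self, rgb, depth):
        # Depth enters as an extra input channel, so geometric cues can help
        # distinguish cast shadows from regions that are simply dark albedo.
        return self.net(torch.cat([rgb, depth], dim=1))

rgb = torch.rand(1, 3, 256, 256)        # input shadow image
depth = DepthNet()(rgb)                 # stage 1: predict depth from RGB
shadow_free = RemovalNet()(rgb, depth)  # stage 2: depth-guided removal
# Stage 3 (boundary refinement) would post-process shadow_free near shadow
# boundaries, again using depth cues; it is omitted from this sketch.

In the paper's framework the removal network is trained adversarially (a GAN), which this sketch omits; only the data flow of the depth guidance is shown.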
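For Pyramid Multi-View Stereo with Local Consistency, the abstract's key idea is that a depth hypothesis at a pixel should agree with the depths of color-similar neighbors, which gives textureless pixels a usable signal where photometric cost is flat. Below is a minimal Python sketch of one way such a local-consistency cost could look; the 4-neighborhood, the bilateral-style color weight, and the sigma_c parameter are assumptions, not the paper's exact formulation.

# Hypothetical sketch of the local-consistency idea (not the paper's code):
# a depth hypothesis at a pixel is penalized for deviating from the depths
# of neighbors whose colors resemble the center pixel's color.
import numpy as np

def local_consistency_cost(img, depth, y, x, d_hyp, sigma_c=10.0):
    """Weighted disagreement between hypothesis d_hyp at (y, x) and the
    current depths of color-similar neighbors."""
    h, w = depth.shape
    center = img[y, x].astype(np.float64)
    cost, weight_sum = 0.0, 0.0
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:
            # Bilateral-style weight: similar color suggests the same surface.
            w_c = np.exp(-np.linalg.norm(center - img[ny, nx]) ** 2
                         / (2.0 * sigma_c ** 2))
            cost += w_c * abs(d_hyp - depth[ny, nx])
            weight_sum += w_c
    return cost / weight_sum if weight_sum > 0 else 0.0

# Toy usage: score one hypothesis at the image center.
img = np.random.rand(64, 64, 3) * 255.0
depth = np.random.rand(64, 64)
print(local_consistency_cost(img, depth, 32, 32, d_hyp=0.5))

In a PatchMatch sweep, a term like this would be added to the photometric cost so that textureless pixels inherit geometry from color-similar neighbors; the pyramid architecture then runs the same sweep coarse-to-fine, upsampling each level's depth as the initialization for the next.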