Browsing by Author "Su, Zhuo"
Now showing 1 - 3 of 3
Item: Feature Representation for High-resolution Clothed Human Reconstruction (© 2023 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Pu, Juncheng; Liu, Li; Fu, Xiaodong; Su, Zhuo; Liu, Lijun; Peng, Wei; Hauser, Helwig and Alliez, Pierre

Detailed and accurate feature representation is essential for high-resolution reconstruction of clothed humans. Herein we introduce a unified feature representation for clothed human reconstruction that can adapt to changing postures and varied clothing details. The method divides into two parts: the human shape feature representation and the detail feature representation. Specifically, we first combine the voxel feature learned from semantic voxels with the pixel feature from the input image as an implicit representation of the human shape. The detail feature, which mixes the clothed-layer feature and the normal feature, then guides a multi-layer perceptron to capture geometric surface details. The key difference from existing methods is that we use clothing semantics to infer clothed-layer information, and further restore layer details with geometric height. Qualitative and quantitative experimental results demonstrate that the proposed method outperforms existing methods in handling limb swing and clothing details. Our method provides a new solution for clothed human reconstruction with high-resolution details (style, wrinkles and clothed layers), and has good potential for three-dimensional virtual try-on and digital characters.

Item: OaIF: Occlusion-Aware Implicit Function for Clothed Human Reconstruction (© 2023 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Tan, Yudi; Guan, Boliang; Zhou, Fan; Su, Zhuo; Hauser, Helwig and Alliez, Pierre

Clothed human reconstruction from a monocular image is challenging due to occlusion, depth ambiguity and variations in body pose.
Recently, shape representation based on an implicit function has proved better suited to the complex topology of clothed humans than explicit representations such as meshes and voxels. This is mainly achieved by using pixel-aligned features, which help the implicit function capture local details. However, such methods use an identical feature map for all sampled points to obtain local features, making the models occlusion-agnostic in the encoding stage. The decoder, as an implicit function, only maps features and does not take occlusion into account explicitly. Thus, these methods fail to generalize well to poses with severe self-occlusion. To address this, we present OaIF to encode local features conditioned on the visibility of SMPL vertices. OaIF projects SMPL vertices onto the image plane to obtain image features masked by visibility. Vertex features integrated with the geometry information of the mesh are then fed into a GAT network for joint encoding. We query hybrid features and occlusion factors for points through cross attention and learn occupancy fields for the clothed human. Experiments demonstrate that OaIF achieves more robust and accurate reconstruction than the state of the art on both public datasets and wild images.

Item: Rain Wiper: An Incremental Randomly Wired Network for Single Image Deraining (The Eurographics Association and John Wiley & Sons Ltd., 2019) Liang, Xiwen; Qiu, Bin; Su, Zhuo; Gao, Chengying; Shi, Xiaohong; Wang, Ruomei; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon

Single image rain removal is a challenging ill-posed problem due to the varied shapes and densities of rain streaks. We present a novel incremental randomly wired network (IRWN) for single image deraining. Unlike previous methods, most module structures in IRWN are generated by a stochastic network generator based on random graph theory, which eases the burden of manual design and further helps to characterize more complex rain streaks.
To decrease network parameters and extract more details efficiently, the image pyramid is fused via a multi-scale network structure. An incremental rectified loss is proposed to better remove rain streaks under different rain conditions and recover the texture information of target objects. Extensive experiments on synthetic and real-world datasets demonstrate that the proposed method significantly outperforms the state-of-the-art methods. In addition, an ablation study illustrates the improvements obtained by different modules and loss terms in IRWN.
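The IRWN abstract above does not specify its stochastic network generator, so the following is only a toy sketch of the general idea behind randomly wired modules: sample a forward-only random graph (Erdős–Rényi style) whose edges define how a module's nodes are connected. The function name and parameters (`n_nodes`, `p`, `seed`) are invented for illustration and are not from the paper.

```python
import random

def random_dag_wiring(n_nodes=8, p=0.4, seed=0):
    """Sample a toy random wiring for a network module.

    Edges only go from lower- to higher-indexed nodes, so the
    result is a DAG that can be executed in index order.
    """
    rng = random.Random(seed)  # seeded for reproducible wiring
    # Sample each forward edge i -> j (i < j) with probability p.
    edges = {(i, j)
             for i in range(n_nodes)
             for j in range(i + 1, n_nodes)
             if rng.random() < p}
    # Patch the graph so every non-input node has an incoming edge...
    for j in range(1, n_nodes):
        if not any(dst == j for (_, dst) in edges):
            edges.add((rng.randrange(j), j))
    # ...and every non-output node has an outgoing edge.
    for i in range(n_nodes - 1):
        if not any(src == i for (src, _) in edges):
            edges.add((i, rng.randrange(i + 1, n_nodes)))
    return sorted(edges)

wiring = random_dag_wiring()
```

Each sampled edge list could then be instantiated as a module whose nodes apply convolutions and sum their incoming activations; re-running with a different seed yields a different architecture at no manual design cost.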