Browsing by Author "Ma, Chongyang"
Now showing 1 - 4 of 4
Item
3D Keypoint Estimation Using Implicit Representation Learning (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Zhu, Xiangyu; Du, Dong; Huang, Haibin; Ma, Chongyang; Han, Xiaoguang; Memari, Pooran; Solomon, Justin
In this paper, we tackle the challenging problem of 3D keypoint estimation for general objects using a novel implicit representation. Previous works have demonstrated promising results for keypoint prediction through direct coordinate regression or heatmap-based inference. However, these methods are commonly studied for specific subjects, such as human bodies and faces, which have fixed keypoint structures. They also struggle in several practical scenarios where explicit or complete geometry is not available, including images and partial point clouds. Inspired by the recent success of advanced implicit representations in reconstruction tasks, we explore the idea of using an implicit field to represent keypoints. Specifically, our key idea is to employ spheres to represent 3D keypoints, thereby making the corresponding signed distance field learnable. Explicit keypoints can subsequently be extracted by our algorithm based on the Hough transform. Quantitative and qualitative evaluations show the superiority of our representation in terms of prediction accuracy.

Item
Implicit Neural Deformation for Sparse-View Face Reconstruction (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Li, Moran; Huang, Haibin; Zheng, Yi; Li, Mengtian; Sang, Nong; Ma, Chongyang; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
In this work, we present a new method for 3D face reconstruction from sparse-view RGB images. Unlike previous methods, which are built upon 3D morphable models (3DMMs) with limited details, we leverage an implicit representation to encode rich geometric features.
Our overall pipeline consists of two major components: a geometry network, which learns a deformable neural signed distance function (SDF) as the 3D face representation, and a rendering network, which learns to render on-surface points of the neural SDF to match the input images via self-supervised optimization. To handle in-the-wild sparse-view input of the same target with different expressions at test time, we propose a residual latent code to effectively expand the shape space of the learned implicit face representation, as well as a novel view-switch loss to enforce consistency among different views. Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and achieves superior face reconstruction results compared to state-of-the-art methods.

Item
Multi-Modal Face Stylization with a Generative Prior (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Li, Mengtian; Dong, Yi; Lin, Minxuan; Huang, Haibin; Wan, Pengfei; Ma, Chongyang; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
In this work, we introduce a new approach for face stylization. Although existing methods achieve impressive results in this task, there is still room for improvement in generating high-quality artistic faces with diverse styles and accurate facial reconstruction. Our proposed framework, MMFS, supports multi-modal face stylization by leveraging the strengths of StyleGAN, integrating it into an encoder-decoder architecture. Specifically, we use the mid-resolution and high-resolution layers of StyleGAN as the decoder to generate high-quality faces, while aligning its low-resolution layer with the encoder to extract and preserve input facial details. We also introduce a two-stage training strategy, where we train the encoder in the first stage to align the feature maps with StyleGAN and enable a faithful reconstruction of input faces.
In the second stage, the entire network is fine-tuned with artistic data for stylized face generation. To enable the fine-tuned model to be applied to zero-shot and one-shot stylization tasks, we train an additional mapping network from the large-scale Contrastive Language-Image Pre-training (CLIP) space to the latent w+ space of the fine-tuned StyleGAN. Qualitative and quantitative experiments show that our framework achieves superior performance in both one-shot and zero-shot face stylization tasks, outperforming state-of-the-art methods by a large margin.

Item
Real-Time Patch-Based Stylization of Portraits Using Generative Adversarial Network (The Eurographics Association, 2019)
Futschik, David; Chai, Menglei; Cao, Chen; Ma, Chongyang; Stoliar, Aleksei; Korolev, Sergey; Tulyakov, Sergey; Kučera, Michal; Sýkora, Daniel; Kaplan, Craig S.; Forbes, Angus; DiVerdi, Stephen
We present a learning-based style transfer algorithm for human portraits which significantly outperforms the current state of the art in computational overhead while maintaining comparable visual quality. We show how to design a conditional generative adversarial network capable of reproducing the output of the patch-based method of Fišer et al. [FJS*17], which is slow to compute but delivers state-of-the-art visual quality. Since the resulting end-to-end network can be evaluated quickly on current consumer GPUs, our solution enables the first real-time high-quality style transfer for facial videos that runs at interactive frame rates. Moreover, in cases where the original algorithmic approach of Fišer et al. fails, our network can provide a more visually pleasing result thanks to its generalization. We demonstrate the practical utility of our approach on a variety of different styles and target subjects.
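The core idea of the first item above, representing keypoints as spheres in a signed distance field and recovering explicit coordinates with Hough-style voting, can be sketched in a few lines. This is a toy illustration only: it uses an analytic union-of-spheres field in place of the learned one, and the sphere radius, sampling density, and vote-grid size are assumed values, not the paper's settings.

```python
import numpy as np

def sphere_sdf(x, centers, r=0.1):
    # SDF of a union of keypoint spheres: min_i ||x - c_i|| - r
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return d.min(axis=1) - r

def numeric_grad(f, x, eps=1e-4):
    # Central-difference gradient of a scalar field, one axis at a time.
    g = np.zeros_like(x)
    for i in range(3):
        e = np.zeros(3)
        e[i] = eps
        g[:, i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

# Hypothetical keypoints; in the paper these are predicted, not given.
centers_gt = np.array([[0.0, 0.0, 0.0], [0.5, 0.2, -0.3]])
f = lambda x: sphere_sdf(x, centers_gt, r=0.1)

# Sample points near the zero level set of the field.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(50000, 3))
s = f(x)
near = x[np.abs(s) < 0.03]

# Each near-surface point votes for a sphere center by stepping inward
# along the field gradient: x - (sdf(x) + r) * n lands at the center.
g = numeric_grad(f, near)
g /= np.linalg.norm(g, axis=1, keepdims=True)
votes = near - (f(near) + 0.1)[:, None] * g

# Accumulate votes on a coarse grid; the densest cells are the keypoints.
cells, counts = np.unique(np.round(votes / 0.05).astype(int),
                          axis=0, return_counts=True)
top = cells[np.argsort(counts)[-2:]] * 0.05
```

The voting step is why a Hough-style readout works here: every point on a sphere's surface agrees on the same center, so the votes pile up at the keypoint locations even when individual samples are noisy or the surface is only partially observed.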