Volume 39 (2020)
Browsing Volume 39 (2020) by Subject "3D face"
Now showing 1 - 2 of 2
Item: A Cross-Dimension Annotations Method for 3D Structural Facial Landmark Extraction
(© 2020 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Gong, Xun; Chen, Ping; Zhang, Zhemin; Chen, Ke; Xiang, Yue; Li, Xin; Benes, Bedrich and Hauser, Helwig
Recent methods for 2D facial landmark localization perform well on close-to-frontal faces, but 2D landmarks are insufficient to represent the 3D structure of a facial shape. For applications that require better accuracy, such as facial motion capture and 3D shape recovery, 3DA-2D (2D projections of 3D facial annotations) is preferred. Inferring the 3D structure from a single image is an ill-posed problem whose accuracy and robustness are not always guaranteed. This paper addresses accurate 2D facial landmark localization and the transformation between 2D and 3DA-2D landmarks. One way to increase accuracy is to supply more precisely annotated facial images, but traditional cascaded regression methods cannot effectively handle large or noisy training data sets. In this paper, we propose a Mini-Batch Cascaded Regressions (MBCR) method that can iteratively train a robust model from a large data set. Benefiting from an incremental learning strategy and a small learning rate, MBCR is robust to noise in the training data. We also propose a new Cross-Dimension Annotations Conversion (CDAC) method to map facial landmarks from 2D to 3DA-2D coordinates and vice versa. The experimental results show that CDAC combined with MBCR outperforms state-of-the-art methods in 3DA-2D facial landmark localization. Moreover, CDAC can run efficiently at up to 110 fps on a 3.4 GHz CPU workstation. Thus, CDAC provides a way to transform existing 2D alignment methods into 3DA-2D ones without slowing them down. Training and testing code as well as the data set can be downloaded from https://github.com/SWJTU-3DVision/CDAC.

Item: A Discriminative Multi-Channel Facial Shape (MCFS) Representation and Feature Extraction for 3D Human Faces
(© 2020 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Gong, Xun; Li, Xin; Li, Tianrui; Liang, Yongqing; Benes, Bedrich and Hauser, Helwig
Building an effective representation of 3D face geometry is essential for face analysis tasks such as landmark detection, face recognition and reconstruction. This paper proposes a Multi-Channel Facial Shape (MCFS) representation that consists of depth, hand-engineered feature and attention maps to construct a 3D facial descriptor. In addition, a multi-channel adjustment mechanism, named filtered squeeze and reversed excitation (FSRE), is proposed to re-organize the MCFS data. To assign a suitable weight to each channel, FSRE learns the importance of each layer automatically during the training phase. The MCFS and FSRE blocks collaborate effectively to build a robust 3D facial shape representation with excellent discriminative ability. Extensive experiments on both high-resolution and low-resolution face datasets show that facial features extracted by our framework outperform existing methods. The representation is stable against occlusions, data corruption, expressions and pose variations. Also, unlike traditional methods of 3D face feature extraction, which typically take minutes to create 3D features, our system can run in real time.
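
The first item above describes training a cascade of regressors incrementally over mini-batches with a small learning rate. The following is a minimal sketch of that general idea only, assuming flattened landmark coordinates, a placeholder shape-indexed feature extractor, and linear stage regressors; all names, shapes and hyperparameters are illustrative assumptions, not the authors' MBCR implementation.

```python
# Illustrative sketch of mini-batch cascaded regression (not the authors' code).
# Assumptions: images and shapes are NumPy arrays, shapes are flattened (x, y)
# landmark coordinates, and each cascade stage is a linear regressor trained
# with small gradient steps over mini-batches.
import numpy as np

def extract_features(images, shapes):
    """Placeholder shape-indexed feature extractor (assumed, e.g. pixel differences)."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((shapes.shape[0], 128))

def train_cascade(images, gt_shapes, init_shapes, n_stages=4, n_epochs=5,
                  batch_size=256, lr=0.05):
    """Train a cascade of linear regressors incrementally on mini-batches."""
    n, d = gt_shapes.shape               # d = 2 * number of landmarks (flattened)
    regressors = []
    shapes = init_shapes.copy()
    for stage in range(n_stages):
        W = np.zeros((128, d))           # linear regressor for this stage
        for epoch in range(n_epochs):
            order = np.random.permutation(n)
            for start in range(0, n, batch_size):
                idx = order[start:start + batch_size]
                feats = extract_features(images[idx], shapes[idx])
                target = gt_shapes[idx] - shapes[idx]        # residual shape update
                pred = feats @ W
                grad = feats.T @ (pred - target) / len(idx)  # least-squares gradient
                W -= lr * grad                               # small step per mini-batch
        # apply the trained stage to all samples before training the next stage
        shapes = shapes + extract_features(images, shapes) @ W
        regressors.append(W)
    return regressors
```

Updating each stage with many small mini-batch steps, rather than one closed-form fit over the full training set, is one way to keep the training loop memory-friendly and less sensitive to noisy annotations, which is the property the abstract attributes to MBCR.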
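
The second item describes FSRE, a mechanism that learns a suitable weight for each channel of the MCFS representation. The sketch below shows only the generic squeeze-and-excitation pattern for per-channel reweighting, not the authors' "filtered squeeze and reversed excitation" variant; the layer sizes and the 3-channel example tensor are assumptions.

```python
# Illustrative sketch: generic squeeze-and-excitation style channel weighting
# applied to a multi-channel facial tensor (e.g. depth, hand-engineered feature
# and attention maps stacked as channels). Not the authors' FSRE block.
import torch
import torch.nn as nn

class ChannelReweight(nn.Module):
    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)          # global average pool per channel
        self.excite = nn.Sequential(
            nn.Linear(channels, max(channels // reduction, 1)),
            nn.ReLU(inplace=True),
            nn.Linear(max(channels // reduction, 1), channels),
            nn.Sigmoid(),                               # per-channel weight in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)                  # (B, C) channel summaries
        w = self.excite(w).view(b, c, 1, 1)             # learned channel importance
        return x * w                                    # re-weight each channel map

if __name__ == "__main__":
    mcfs = torch.randn(4, 3, 64, 64)                    # hypothetical 3-channel input
    out = ChannelReweight(channels=3)(mcfs)
    print(out.shape)                                    # torch.Size([4, 3, 64, 64])
```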