Action Unit Driven Facial Expression Synthesis from a Single Image with Patch Attentive GAN
dc.contributor.author | Zhao, Yong | en_US |
dc.contributor.author | Yang, Le | en_US |
dc.contributor.author | Pei, Ercheng | en_US |
dc.contributor.author | Oveneke, Meshia Cédric | en_US |
dc.contributor.author | Alioscha‐Perez, Mitchel | en_US |
dc.contributor.author | Li, Longfei | en_US |
dc.contributor.author | Jiang, Dongmei | en_US |
dc.contributor.author | Sahli, Hichem | en_US |
dc.contributor.editor | Benes, Bedrich and Hauser, Helwig | en_US |
dc.date.accessioned | 2021-10-08T07:38:06Z | |
dc.date.available | 2021-10-08T07:38:06Z | |
dc.date.issued | 2021 | |
dc.description.abstract | Recent advances in generative adversarial networks (GANs) have shown tremendous success in facial expression generation tasks. However, generating vivid and expressive facial expressions at the Action Unit (AU) level remains challenging, because automatic facial expression analysis for AU intensity is itself an unsolved, difficult task. In this paper, we propose a novel synthesis‐by‐analysis approach that leverages the GAN framework and a state‐of‐the‐art AU detection model to achieve better results for AU‐driven facial expression generation. Specifically, we design a novel discriminator architecture by modifying the patch‐attentive AU detection network for AU intensity estimation and combining it with a global image encoder for adversarial learning, forcing the generator to produce more expressive and realistic facial images. We also introduce a balanced sampling approach to alleviate the imbalanced learning problem in AU synthesis. Extensive experiments on DISFA and DISFA+ show that our approach outperforms the state of the art in photo‐realism and expressiveness of the generated facial expressions, both quantitatively and qualitatively. | en_US |
dc.description.number | 6 | |
dc.description.sectionheaders | Articles | |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.volume | 40 | |
dc.identifier.doi | 10.1111/cgf.14202 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.pages | 47-61 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.14202 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf14202 | |
dc.publisher | © 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd | en_US |
dc.subject | facial animation | |
dc.subject | animation | |
dc.subject | image/video editing | |
dc.subject | image and video processing | |
dc.subject | image‐based rendering | |
dc.subject | rendering | |
dc.title | Action Unit Driven Facial Expression Synthesis from a Single Image with Patch Attentive GAN | en_US |