40-Issue 6
Browsing 40-Issue 6 by Subject "animation"
Now showing 1 - 3 of 3
Item: Action Unit Driven Facial Expression Synthesis from a Single Image with Patch Attentive GAN
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021)
Zhao, Yong; Yang, Le; Pei, Ercheng; Oveneke, Meshia Cédric; Alioscha‐Perez, Mitchel; Li, Longfei; Jiang, Dongmei; Sahli, Hichem; Benes, Bedrich and Hauser, Helwig
Recent advances in generative adversarial networks (GANs) have shown tremendous success in facial expression generation. However, generating vivid and expressive facial expressions at the Action Unit (AU) level remains challenging, because automatic facial expression analysis for AU intensity is itself an unsolved, difficult task. In this paper, we propose a novel synthesis‐by‐analysis approach that leverages the power of the GAN framework and a state‐of‐the‐art AU detection model to achieve better results for AU‐driven facial expression generation. Specifically, we design a novel discriminator architecture by modifying the patch‐attentive AU detection network for AU intensity estimation and combining it with a global image encoder for adversarial learning, forcing the generator to produce more expressive and realistic facial images. We also introduce a balanced sampling approach to alleviate the imbalanced learning problem in AU synthesis. Extensive experiments on DISFA and DISFA+ show that our approach outperforms the state of the art, both quantitatively and qualitatively, in the photo‐realism and expressiveness of the generated facial expressions.

Item: Parametric Skeletons with Reduced Soft‐Tissue Deformations
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021)
Tapia, Javier; Romero, Cristian; Pérez, Jesús; Otaduy, Miguel A.; Benes, Bedrich and Hauser, Helwig
We present a method to augment parametric skeletal models with subspace soft‐tissue deformations. We combine the benefits of data‐driven skeletal models, i.e. accurate replication of contact‐free static deformations, with the benefits of purely physics‐based models, i.e. skin and skeletal reaction to contact and inertial motion with two‐way coupling. We do so in a highly efficient manner, thanks to a careful choice of reduced model for the subspace deformation. With our method, it is easy to design expressive reduced models with efficient yet accurate force computations, without the need for training deformation examples. We demonstrate the application of our method to parametric models of human bodies (SMPL) and hands (MANO), with interactive simulations of contact with nonlinear soft‐tissue deformation and skeletal response.
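As a rough illustration of the kind of model this abstract describes, the sketch below combines standard linear blend skinning with a low‐dimensional soft‐tissue correction. This is a minimal sketch under stated assumptions, not the authors' implementation: the function name, the subspace basis U, and the coordinates z are illustrative placeholders; in the paper the reduced model is chosen carefully so that force computations stay efficient and accurate.

```python
import numpy as np

def skin_with_subspace(template, U, z, weights, bone_transforms):
    """Linear blend skinning of a rest shape corrected by a reduced
    soft-tissue displacement (hypothetical sketch, not the paper's code).

    template:        (V, 3)    rest-pose vertices of the parametric model
    U:               (3V, r)   reduced deformation basis, r << 3V
    z:               (r,)      subspace soft-tissue coordinates
    weights:         (V, B)    skinning weights per vertex and bone
    bone_transforms: (B, 4, 4) homogeneous bone transforms
    """
    # The soft-tissue correction lives in a low-dimensional subspace,
    # so evaluating it costs a single (3V x r) matrix product.
    corrected = template + (U @ z).reshape(-1, 3)

    # Standard linear blend skinning of the corrected rest shape.
    homo = np.concatenate([corrected, np.ones((len(corrected), 1))], axis=1)
    blended = np.einsum('vb,bij->vij', weights, bone_transforms)  # (V, 4, 4)
    return np.einsum('vij,vj->vi', blended, homo)[:, :3]
```

Because the correction is a dense matrix‐vector product in a small subspace, it adds almost nothing to the per‐frame cost of skinning, which is what makes interactive two‐way coupled simulation plausible.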
Item: A Rapid, End‐to‐end, Generative Model for Gaseous Phenomena from Limited Views
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021)
Qiu, Sheng; Li, Chen; Wang, Changbo; Qin, Hong; Benes, Bedrich and Hauser, Helwig
Despite the rapid development and proliferation of computer graphics hardware for scene capture over the most recent decade, high‐resolution 3D/4D acquisition of gaseous scenes (e.g., smoke) in real time remains technically challenging. In this paper, we explore a hybrid approach that takes advantage of both model‐centric and data‐driven methods. Specifically, we develop a novel conditional generative model that rapidly reconstructs the temporal density and velocity fields of gaseous phenomena from a sequence of two projection views. The data‐driven component strongly couples the density update with the estimation of flow motion; as a result, it greatly improves reconstruction performance for smoke scenes. First, we employ a conditional generative network to generate the initial density field from the input projection views and to estimate the flow motion from adjacent frames. Second, we utilize a differentiable advection layer and design a velocity estimation network with a long‐term mechanism to enable end‐to‐end training and more stable visual results. Third, we can re‐simulate the input scene with flexible coupling effects based on the estimated velocity field, subject to artists' guidance or user interaction. Moreover, our generative model accommodates a single projection view as input; in practice, additional projection views enable higher‐fidelity reconstruction with more realistic and finer details. Extensive experiments confirm the effectiveness, efficiency, and robustness of our new method compared with previous state‐of‐the‐art techniques.
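The differentiable advection layer mentioned in the abstract is what lets reconstruction losses train the velocity network end‐to‐end. A common way to build such a layer is semi‐Lagrangian backtracing through a differentiable sampler; the minimal 2D PyTorch sketch below is an assumption about the general technique, not the paper's actual layer (which handles full 3D fields and a specific network design).

```python
import torch
import torch.nn.functional as F

def advect(density, velocity):
    """Differentiable semi-Lagrangian advection of a 2D field (sketch).

    density:  (N, 1, H, W) scalar field to transport
    velocity: (N, 2, H, W) flow, in grid cells per time step (vx, vy)
    """
    n, _, h, w = density.shape
    # Base sampling grid in the normalized [-1, 1] coordinates
    # expected by F.grid_sample.
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h, device=density.device),
        torch.linspace(-1.0, 1.0, w, device=density.device),
        indexing='ij')
    base = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
    # Convert the velocity from grid cells to normalized coordinates.
    flow = torch.stack((velocity[:, 0] * 2.0 / (w - 1),
                        velocity[:, 1] * 2.0 / (h - 1)), dim=-1)
    # Backtrace: each cell pulls its new value from where the material
    # was one step earlier (x - v).
    return F.grid_sample(density, base - flow,
                         align_corners=True, padding_mode='border')
```

Since F.grid_sample is differentiable with respect to both the sampled field and the sampling locations, a loss on the advected density propagates gradients back into whatever network produced the velocity estimate.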