43-Issue 6
Browsing 43-Issue 6 by Subject "facial animation"
Now showing 1 - 2 of 2
Item: Infinite 3D Landmarks: Improving Continuous 2D Facial Landmark Detection (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Chandran, P.; Zoss, G.; Gotardo, P.; Bradley, D.; Alliez, Pierre; Wimmer, Michael
In this paper, we examine three important issues in the practical use of state‐of‐the‐art facial landmark detectors and show how a combination of specific architectural modifications can directly improve their accuracy and temporal stability. First, many facial landmark detectors require a face normalization step as a pre‐process, often accomplished by a separately trained neural network that crops and resizes the face in the input image. There is no guarantee that this pre‐trained network performs optimal face normalization for the task of landmark detection. Thus, we instead analyse the use of a spatial transformer network that is trained alongside the landmark detector in an unsupervised manner, jointly learning an optimal face normalization and landmark detection in a single neural network. Second, we show that modifying the output head of the landmark predictor to infer landmarks in a canonical 3D space rather than directly in 2D can further improve accuracy. To convert the predicted 3D landmarks into screen space, we additionally predict the camera intrinsics and head pose from the input image. As a side benefit, this allows predicting the 3D face shape from a given image using only 2D landmarks as supervision, which is useful for determining landmark visibility, among other things. Third, when training a landmark detector on multiple datasets at the same time, annotation inconsistencies across datasets force the network to produce a sub‐optimal average. We propose to add a semantic correction network to address this issue. This additional lightweight neural network is trained alongside the landmark detector, without requiring any additional supervision.
While the insights of this paper can be applied to most common landmark detectors, we specifically target a recently proposed continuous 2D landmark detector to demonstrate how each of our additions leads to meaningful improvements over the state‐of‐the‐art on standard benchmarks.

Item: VolTeMorph: Real‐time, Controllable and Generalizable Animation of Volumetric Representations (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Garbin, Stephan J.; Kowalski, Marek; Estellers, Virginia; Szymanowicz, Stanislaw; Rezaeifar, Shideh; Shen, Jingjing; Johnson, Matthew A.; Valentin, Julien; Alliez, Pierre; Wimmer, Michael
The recent increase in popularity of volumetric representations for scene reconstruction and novel view synthesis has put renewed focus on animating volumetric content at high visual quality and in real time. While implicit deformation methods based on learned functions can produce impressive results, they are 'black boxes' to artists and content creators, they require large amounts of training data to generalize meaningfully, and they do not produce realistic extrapolations outside of this data. In this work, we solve these issues by introducing a volume deformation method that runs in real time even for complex deformations, is easy to edit with off‐the‐shelf software, and can extrapolate convincingly. To demonstrate the versatility of our method, we apply it in two scenarios: physics‐based object deformation, and telepresence where avatars are controlled using blendshapes. We also perform thorough experiments showing that our method compares favourably both to volumetric approaches combined with implicit deformation and to methods based on mesh deformation.
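The screen-space conversion described in the first abstract, projecting canonical 3D landmarks into 2D using predicted camera intrinsics and head pose, amounts to a standard pinhole projection. A minimal sketch of that step (the function name and the toy intrinsics are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def project_landmarks(landmarks_3d, K, R, t):
    """Project canonical 3D landmarks to 2D screen space.

    landmarks_3d: (N, 3) points in a canonical face space.
    K: (3, 3) camera intrinsic matrix.
    R: (3, 3) head rotation; t: (3,) head translation.
    """
    cam = landmarks_3d @ R.T + t       # canonical space -> camera space
    proj = cam @ K.T                   # apply camera intrinsics
    return proj[:, :2] / proj[:, 2:3]  # perspective divide -> pixel coords

# Toy example: one landmark 2 units in front of an identity-pose camera.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 2.0]])
uv = project_landmarks(pts, K, np.eye(3), np.zeros(3))
# uv -> [[320., 240.]] (the principal point, since the landmark lies on the optical axis)
```

In the paper's setting, K, R and t would themselves be regressed by the network from the input image, so the whole pipeline remains supervised only by 2D landmark annotations.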