Volume 41 (2022)
Browsing Volume 41 (2022) by Subject "Animation" (7 items)
Item: Efficient and Stable Simulation of Inextensible Cosserat Rods by a Compact Representation (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Zhao, Chongyao; Lin, Jinkeng; Wang, Tianyu; Bao, Hujun; Huang, Jin; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne

Piecewise linear inextensible Cosserat rods are usually represented by Cartesian coordinates of vertices and quaternions on the segments. Such representations use excessive degrees of freedom (DOFs) and need many additional constraints, causing unnecessary numerical difficulty and computational burden in simulation. We propose a simple yet compact representation that exactly matches the intrinsic DOFs and naturally satisfies all such constraints. Specifically, viewing a rod as a chain of rigid segments, we encode its shape as the Cartesian coordinates of its root vertex together with an axis-angle representation of the material frame on each segment. Under our representation, the Hessian of the implicit time-stepping system has a special non-zero pattern. Exploiting this structure, we can solve the associated linear equations in nearly linear time. Furthermore, we carefully design a preconditioner that is provably symmetric positive-definite and accelerates the PCG solver by one to two orders of magnitude compared with the widely used block-diagonal one. Compared with other technical choices, including Super-Helices, a specially designed compact representation for inextensible Cosserat rods, our method achieves better performance and stability, and can simulate an inextensible Cosserat rod with hundreds of vertices and tens of collisions in real time under relatively large time steps.
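As a point of reference only (this is not the authors' code; the segment-direction convention and the segment lengths are assumptions), the compact representation described above can be illustrated by reconstructing the rod's vertices from the root position plus one axis-angle material frame per segment:

# Illustrative sketch: rebuild a piecewise-rigid rod centerline from the compact
# representation (root vertex + per-segment axis-angle frames). Assumes each
# segment points along its frame's local z-axis; lengths are given separately.
import numpy as np

def axis_angle_to_matrix(phi):
    """Rodrigues' formula: rotation matrix from an axis-angle vector phi."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def reconstruct_vertices(root, axis_angles, lengths):
    """Chain rigid segments; each frame's rotated z-axis gives the segment direction."""
    verts = [np.asarray(root, dtype=float)]
    for phi, L in zip(axis_angles, lengths):
        R = axis_angle_to_matrix(np.asarray(phi, dtype=float))
        verts.append(verts[-1] + L * R[:, 2])   # advance along the segment
    return np.array(verts)

# Example: a 3-segment rod of unit-length segments starting at the origin.
verts = reconstruct_vertices(root=[0.0, 0.0, 0.0],
                             axis_angles=[[0, 0, 0], [0, 0.3, 0], [0, 0.6, 0]],
                             lengths=[1.0, 1.0, 1.0])
print(verts)

Because every configuration generated this way is a chain of rigid segments, inextensibility holds by construction, which is why no additional length constraints are needed in such a parameterization.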
Item: Generating Upper-Body Motion for Real-Time Characters Making their Way through Dynamic Environments (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Alvarado, Eduardo; Rohmer, Damien; Cani, Marie-Paule; Dominik L. Michels; Soeren Pirk

Real-time character animation in dynamic environments requires the generation of plausible upper-body movements regardless of the nature of the environment, including non-rigid obstacles such as vegetation. We propose a flexible model for upper-body interactions, based on the anticipation of the character's surroundings and on antagonistic controllers that adapt the amount of muscular stiffness and the response time to better deal with obstacles. Our solution relies on a hybrid method for character animation that couples a keyframe sequence with kinematic constraints and lightweight physics. The dynamic response of the character's upper limbs leverages antagonistic controllers, allowing us to tune tension and relaxation in the upper body without diverging from the reference keyframe motion. A new sight model, controlled by procedural rules, enables high-level authoring of the way the character generates interactions by adapting its stiffness and reaction time. As our results show, the real-time method offers precise and explicit control over the character's behavior and style, while seamlessly adapting to new situations. Our model is therefore well suited for gaming applications.

Item: Harmonic Shape Interpolation on Multiply-connected Planar Domains (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Shi, Dongbo; Chen, Renjie; Campen, Marcel; Spagnuolo, Michela

Shape interpolation is a fundamental problem in computer graphics. Recently, several interpolation methods have been developed that guarantee a bounded amount of geometric distortion in the results, and hence ensure high-quality interpolation. However, none of these methods is applicable to shapes on multiply-connected domains. In this work, we develop an interpolation scheme for harmonic mappings that specifically addresses this limitation. We opt to interpolate the pullback metric of the input harmonic maps, as proposed by Chen et al. [CWKBC13]. However, the interpolated metric does not correspond to any planar mapping, which is the main challenge of the interpolation problem on multiply-connected domains. We propose to solve this by projecting the interpolated metric onto the space of planar harmonic mappings. Specifically, we develop a Newton iteration that minimizes the isometric distortion of the intermediate mapping with respect to the interpolated metric. To make the Newton iteration more efficient, we further derive a simple analytic formula for the positive semidefinite (PSD) projection of the Hessian matrix of our distortion energy. Through extensive experiments and comparisons with the state of the art, we demonstrate the efficacy and robustness of our method on various inputs.
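As background for the PSD projection mentioned above: the generic numerical way to project a symmetric Hessian block onto the positive semidefinite cone is to clamp its negative eigenvalues. The sketch below shows that standard eigenvalue-clamping approach; it is illustrative only and is not the paper's analytic formula, which avoids the explicit eigen-decomposition for its specific distortion energy.

# Illustrative sketch: PSD projection of a symmetric Hessian block by eigenvalue
# clamping, a common way to keep Newton steps descent directions when the energy
# is locally non-convex.
import numpy as np

def project_psd(H, eps=0.0):
    """Return the nearest (Frobenius norm) positive semidefinite matrix to symmetric H."""
    H = 0.5 * (H + H.T)                      # symmetrize against round-off
    w, V = np.linalg.eigh(H)                 # eigen-decomposition of symmetric H
    w = np.maximum(w, eps)                   # clamp negative eigenvalues
    return (V * w) @ V.T                     # V diag(w) V^T

# Example: an indefinite 2x2 block becomes PSD after projection.
H = np.array([[1.0, 2.0],
              [2.0, 1.0]])                   # eigenvalues 3 and -1
print(np.linalg.eigvalsh(project_psd(H)))    # -> approximately [0., 3.]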
Item: Interaction Mix and Match: Synthesizing Close Interaction using Conditional Hierarchical GAN with Multi-Hot Class Embedding (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Goel, Aman; Men, Qianhui; Ho, Edmond S. L.; Dominik L. Michels; Soeren Pirk

Synthesizing multi-character interactions is a challenging task due to the complex and varied interactions between the characters. In particular, precise spatiotemporal alignment between characters is required to generate close interactions such as dancing and fighting. Existing work on generating multi-character interactions focuses on generating a single type of reactive motion for a given sequence, which results in a lack of variety in the resulting motions. In this paper, we propose a novel way to create realistic human reactive motions that are not present in the given dataset, by mixing and matching different types of close interactions. We propose a Conditional Hierarchical Generative Adversarial Network with Multi-Hot Class Embedding to generate the Mix and Match reactive motions of the follower from a given motion sequence of the leader. Experiments are conducted on both noisy (depth-based) and high-quality (MoCap-based) interaction datasets. The quantitative and qualitative results show that our approach outperforms state-of-the-art methods on the given datasets. We also provide an augmented dataset with realistic reactive motions to stimulate future research in this area.

Item: Monocular Facial Performance Capture Via Deep Expression Matching (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Bailey, Stephen W.; Riviere, Jérémy; Mikkelsen, Morten; O'Brien, James F.; Dominik L. Michels; Soeren Pirk

Facial performance capture is the process of automatically animating a digital face according to a captured performance of an actor. Recent developments in this area have focused on high-quality results using expensive head-scanning equipment and camera rigs. These methods produce impressive animations that accurately capture subtle details in an actor's performance. However, they are accessible only to content creators with relatively large budgets. Current methods using inexpensive recording equipment generally produce lower-quality output that is unsuitable for many applications. In this paper, we present a facial performance capture method that does not require facial scans and instead animates an artist-created model using standard blendshapes. Furthermore, our method gives artists high-level control over animations through a workflow similar to existing commercial solutions. Given a recording, our approach matches keyframes of the video with corresponding expressions from an animated library of poses. A Gaussian process model then computes the full animation by interpolating from the set of matched keyframes. Our expression-matching method computes a low-dimensional latent code from an image that represents a facial expression while factoring out the facial identity. Images depicting similar facial expressions are identified by their proximity in the latent space. In our results, we demonstrate the fidelity of our expression-matching method. We also compare animations generated with our approach to animations generated with commercially available software.

Item: Pose Representations for Deep Skeletal Animation (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Andreou, Nefeli; Aristidou, Andreas; Chrysanthou, Yiorgos; Dominik L. Michels; Soeren Pirk

Data-driven skeletal animation relies on the existence of a suitable learning scheme, which can capture the rich context of motion. However, commonly used motion representations often fail to accurately encode the full articulation of motion, or present artifacts. In this work, we address the fundamental problem of finding a robust pose representation for motion, suitable for deep skeletal animation, one that can better constrain poses and faithfully capture nuances correlated with skeletal characteristics. Our representation is based on dual quaternions, mathematical abstractions with well-defined operations that simultaneously encode rotational and positional information, enabling a rich encoding centered around the root. We demonstrate that our representation overcomes common motion artifacts, and we assess its performance compared to other popular representations. We conduct an ablation study to evaluate the impact of various losses that can be incorporated during learning. Leveraging the fact that our representation implicitly encodes skeletal motion attributes, we train a network on a dataset comprising skeletons with different proportions, without first retargeting them to a universal skeleton, a retargeting step that would cause subtle motion elements to be missed. Qualitative results demonstrate the usefulness of the parameterization for skeleton-specific synthesis.
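For readers unfamiliar with the representation named in the previous entry, the sketch below shows the basic encoding of one joint's rigid transform as a dual quaternion: the real part carries the rotation and the dual part is derived from the translation. This is a generic illustration with assumed conventions (w-first quaternions), not the paper's implementation.

# Illustrative sketch: dual-quaternion encoding of a rigid transform.
# Quaternions are stored as (w, x, y, z).
import numpy as np

def qmul(a, b):
    """Hamilton product of two quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([aw*bw - ax*bx - ay*by - az*bz,
                     aw*bx + ax*bw + ay*bz - az*by,
                     aw*by - ax*bz + ay*bw + az*bx,
                     aw*bz + ax*by - ay*bx + az*bw])

def to_dual_quaternion(rotation_q, translation):
    """Real part = rotation; dual part = 0.5 * (0, t) * rotation."""
    t = np.array([0.0, *translation])
    return rotation_q, 0.5 * qmul(t, rotation_q)

def translation_from_dual(real, dual):
    """Recover the translation: t = 2 * dual * conjugate(real)."""
    conj = real * np.array([1.0, -1.0, -1.0, -1.0])
    return 2.0 * qmul(dual, conj)[1:]

# Example: identity rotation with translation (1, 2, 3) round-trips exactly.
real, dual = to_dual_quaternion(np.array([1.0, 0.0, 0.0, 0.0]), [1.0, 2.0, 3.0])
print(translation_from_dual(real, dual))   # -> [1. 2. 3.]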
Item: Voice2Face: Audio-driven Facial and Tongue Rig Animations with cVAEs (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Villanueva Aylagas, Monica; Anadon Leon, Hector; Teye, Mattias; Tollmar, Konrad; Dominik L. Michels; Soeren Pirk

We present Voice2Face: a deep learning model that generates face and tongue animations directly from recorded speech. Our approach consists of two steps: a conditional Variational Autoencoder generates mesh animations from speech, while a separate module maps the animations to rig controller space. Our contributions include an automated method for speech style control, a method to train a model with data from multiple quality levels, and a method for animating the tongue. Unlike previous works, our model generates animations without speaker-dependent characteristics while allowing speech style control. We demonstrate through a user study that Voice2Face significantly outperforms a comparable state-of-the-art model in terms of perceived animation quality, and our quantitative evaluation suggests that Voice2Face yields more accurate lip closure on bilabials through our speech style optimization. Both evaluations also show that our data quality conditioning scheme outperforms both an unconditioned model and a model trained on a smaller high-quality dataset. Finally, the user study shows a preference for animations that include the tongue. Results from our model can be seen at https://go.ea.com/voice2face.
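To make the two-step pipeline above concrete, the sketch below shows the core of a conditional Variational Autoencoder that decodes mesh frames conditioned on an audio feature vector via the reparameterization trick. All layer sizes, feature dimensions, and module names are hypothetical assumptions for illustration; this is not the Voice2Face architecture.

# Illustrative sketch: a minimal conditional VAE that maps audio features to
# per-frame mesh offsets (dimensions are placeholder assumptions).
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    def __init__(self, audio_dim=128, latent_dim=32, mesh_dim=3 * 500):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(mesh_dim + audio_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim + audio_dim, 256), nn.ReLU(),
                                     nn.Linear(256, mesh_dim))

    def forward(self, mesh, audio):
        h = self.encoder(torch.cat([mesh, audio], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        recon = self.decoder(torch.cat([z, audio], dim=-1))
        return recon, mu, logvar

    def sample(self, audio):
        """At inference time, draw z from the prior and decode conditioned on audio."""
        z = torch.randn(audio.shape[0], self.to_mu.out_features)
        return self.decoder(torch.cat([z, audio], dim=-1))

# Example: generate one frame of mesh offsets from a dummy audio feature vector.
model = ConditionalVAE()
frame = model.sample(torch.zeros(1, 128))
print(frame.shape)   # -> torch.Size([1, 1500])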