41-Issue 8
Browsing 41-Issue 8 by Issue Date
Item
UnderPressure: Deep Learning for Foot Contact Detection, Ground Reaction Force Estimation and Footskate Cleanup (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Mourot, Lucas; Hoyet, Ludovic; Clerc, François Le; Hellier, Pierre; Dominik L. Michels; Soeren Pirk
Human motion synthesis and editing are essential to many applications like video games, virtual reality, and film postproduction. However, they often introduce artefacts in motion capture data, which can be detrimental to the perceived realism. In particular, footskating is a frequent and disturbing artefact, which requires knowledge of foot contacts to be cleaned up. Current approaches to obtain foot contact labels rely either on unreliable threshold-based heuristics or on tedious manual annotation. In this article, we address automatic foot contact label detection from motion capture data with a deep learning based method. To this end, we first publicly release UNDERPRESSURE, a novel motion capture database labelled with pressure-insole data serving as reliable knowledge of foot contact with the ground. We then design and train a deep neural network to estimate ground reaction forces exerted on the feet from motion data, and derive accurate foot contact labels from these estimates. The evaluation of our model shows that we significantly outperform heuristic approaches based on height and velocity thresholds and that our approach is much more robust when applied to motion sequences suffering from perturbations such as noise or footskate. We further propose a fully automatic workflow for footskate cleanup: foot contact labels are first derived from estimated ground reaction forces; footskate is then removed by solving foot constraints through an optimisation-based inverse kinematics (IK) approach that ensures consistency with the estimated ground reaction forces.
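As a rough illustration of the two labelling strategies compared above, the sketch below derives contact labels by thresholding estimated vertical ground reaction forces, next to the height/velocity heuristic the paper argues against. Function names and threshold values are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def contact_labels_from_grf(grf, force_threshold=0.05):
    """Label a frame 'in contact' when the estimated vertical GRF
    (normalised by body weight) exceeds a small fraction of body
    weight. The threshold value is illustrative only."""
    return grf > force_threshold

def contact_labels_heuristic(heights, velocities, h_max=0.05, v_max=0.2):
    """Classic threshold heuristic: a foot is in contact when it is
    both low (height below h_max) and slow (speed below v_max)."""
    return (heights < h_max) & (velocities < v_max)

# Toy clip: the foot plants (high force) only in the middle frames.
grf = np.array([0.0, 0.0, 0.6, 0.8, 0.7, 0.0])
labels = contact_labels_from_grf(grf)
```

The GRF-based labels are robust to noisy positions because they depend on estimated forces rather than on fragile height/velocity thresholds.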
Beyond footskate cleanup, both the database and the method we propose could help improve many approaches based on foot contact labels or ground reaction forces, including inverse dynamics problems such as motion reconstruction, and the learning of deep motion models in motion synthesis or character animation. Our implementation, pre-trained model, and links to the database can be found at github.com/InterDigitalInc/UnderPressure.

Item
Generating Upper-Body Motion for Real-Time Characters Making their Way through Dynamic Environments (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Alvarado, Eduardo; Rohmer, Damien; Cani, Marie-Paule; Dominik L. Michels; Soeren Pirk
Real-time character animation in dynamic environments requires the generation of plausible upper-body movements regardless of the nature of the environment, including non-rigid obstacles such as vegetation. We propose a flexible model for upper-body interactions, based on anticipation of the character's surroundings and on antagonistic controllers that adapt the amount of muscular stiffness and response time to better deal with obstacles. Our solution relies on a hybrid method for character animation that couples a keyframe sequence with kinematic constraints and lightweight physics. The dynamic response of the character's upper limbs leverages antagonistic controllers, allowing us to tune tension and relaxation in the upper body without diverging from the reference keyframe motion. A new sight model, controlled by procedural rules, enables high-level authoring of the way the character generates interactions by adapting its stiffness and reaction time. As our results show, our real-time method offers precise and explicit control over the character's behavior and style, while seamlessly adapting to new situations.
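The tension/relaxation tuning described above can be sketched as a PD-style controller tracking a keyframe reference angle: raising the stiffness tenses the limb so it follows the reference tightly, while lowering it lets physics dominate. The formulation, gains, and integrator below are a minimal illustrative guess, not the paper's actual model.

```python
def antagonistic_torque(theta, theta_ref, omega, stiffness, damping):
    # Restoring torque pulling a 1-DOF joint back toward its
    # keyframe reference angle theta_ref.
    return stiffness * (theta_ref - theta) - damping * omega

def simulate_joint(theta0, theta_ref, stiffness, damping,
                   inertia=1.0, dt=0.01, steps=1000):
    # Semi-implicit Euler integration of the single joint.
    theta, omega = theta0, 0.0
    for _ in range(steps):
        tau = antagonistic_torque(theta, theta_ref, omega, stiffness, damping)
        omega += dt * tau / inertia
        theta += dt * omega
    return theta
```

A stiff joint settles near the reference; a relaxed one lags far behind over the same time window, which is the behavioural knob the sight model would drive.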
Our model is therefore well suited for gaming applications.

Item
Facial Animation with Disentangled Identity and Motion using Transformers (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Chandran, Prashanth; Zoss, Gaspard; Gross, Markus; Gotardo, Paulo; Bradley, Derek; Dominik L. Michels; Soeren Pirk
We propose a 3D+time framework for modeling dynamic sequences of 3D facial shapes, representing realistic non-rigid motion during a performance. Our work extends neural 3D morphable models by learning a motion manifold using a transformer architecture. More specifically, we derive a novel transformer-based autoencoder that can model and synthesize 3D geometry sequences of arbitrary length. This transformer naturally determines frame-to-frame correlations required to represent the motion manifold, via the internal self-attention mechanism. Furthermore, our method disentangles the constant facial identity from the time-varying facial expressions in a performance, using two separate codes to represent neutral identity and the performance itself within separate latent subspaces. Thus, the model represents identity-agnostic performances that can be paired with an arbitrary new identity code and fed through our new identity-modulated performance decoder; the result is a sequence of 3D meshes for the performance with the desired identity and temporal length. We demonstrate how our disentangled motion model has natural applications in performance synthesis, performance retargeting, key-frame interpolation and completion of missing data, performance denoising and retiming, and other potential applications that include full 3D body modeling.

Item
Tiled Characteristic Maps for Tracking Detailed Liquid Surfaces (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Narita, Fumiya; Ando, Ryoichi; Dominik L. Michels; Soeren Pirk
We introduce tiled characteristic maps for the level set method that accurately preserve both thin sheets and sharp edges over long periods of time. Instead of resorting to high-order differential schemes, we utilize the characteristics mapping method to minimize numerical diffusion induced by advection. We find that although a single characteristic map can better preserve detailed geometry, it suffers from frequent global re-initialization due to the strong distortions that are locally generated. We show that when multiple localized, tiled characteristic maps are used, this limitation is confined within individual tiles, enabling long-term preservation of detailed structures where little distortion is observed. When applied to liquid simulation, we demonstrate that, at a reasonable amount of added computational cost, our method retains small-scale, high-fidelity detail (e.g., splashes and waves) that is quickly smeared out or deleted by purely grid-based or particle level set methods.

Item
Learning Physics with a Hierarchical Graph Network (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Chentanez, Nuttapong; Jeschke, Stefan; Müller, Matthias; Macklin, Miles; Dominik L. Michels; Soeren Pirk
We propose a hierarchical graph for learning physics and a novel way to handle obstacles. The finest level of the graph consists of the particles themselves. Coarser levels consist of the cells of sparse grids with successively doubling cell sizes covering the volume occupied by the particles. The hierarchical structure allows information to propagate over great distances in a single message passing iteration. The novel obstacle handling allows the simulation to be obstacle-aware without the need for ghost particles. We train the network to predict the effective acceleration produced by multiple sub-steps of a 3D multi-material material point method (MPM) simulation consisting of water, sand and snow with complex obstacles.
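The coarse levels described above, sparse grids with doubling cell sizes over the occupied volume, might be constructed along the following lines; the exact binning scheme is an assumption for illustration, not the paper's implementation.

```python
def build_hierarchy(positions, base_cell=1.0, levels=3):
    """Bin particles into sparse grid cells of size base_cell * 2**l
    at each level l, keeping only occupied cells. Each level's cell
    set stands in for one coarse graph level."""
    hierarchy = []
    for l in range(levels):
        size = base_cell * (2 ** l)
        cells = {tuple(int(c // size) for c in p) for p in positions}
        hierarchy.append(cells)
    return hierarchy
```

Because each level halves the resolution, a message hop on a coarse level covers exponentially more physical distance than one on the particle level, which is what lets information propagate far in a single message-passing iteration.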
Our network produces lower error, trains up to 7.0X faster and runs inference up to 11.3X faster than [SGGP*20]. It is also, on average, about 3.7X faster than the Taichi Elements simulation running on the same hardware in our tests.

Item
Voronoi Filters for Simulation Enrichment (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Casafranca, Juan J.; Otaduy, Miguel A.; Dominik L. Michels; Soeren Pirk
The simulation of complex deformation problems often requires enrichment techniques that introduce local high-resolution detail on a generally coarse discretization. Use cases include spatial or temporal refinement of the discretization, the simulation of composite materials with phenomena occurring at different scales, or even codimensional simulation. We present an efficient simulation enrichment method for both local refinement of the discretization and codimensional effects. We dub our method Voronoi filters, as it combines two key computational elements. One is the use of kinematic filters to constrain coarse and fine deformations, and thus provide enrichment functions that are complementary to the coarse deformation. The other is the use of a centroidal Voronoi discretization for the design of the enrichment functions, which adds high-resolution detail in a compact manner while preserving the rigid modes of the coarse deformation. We demonstrate our method on simulation examples of composite materials, hybrid triangle-based and yarn-level simulation of cloth, and enrichment of flesh simulation with high-resolution detail.

Item
Surface-Only Dynamic Deformables using a Boundary Element Method (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Sugimoto, Ryusuke; Batty, Christopher; Hachisuka, Toshiya; Dominik L. Michels; Soeren Pirk
We propose a novel surface-only method for simulating dynamic deformables without the need for volumetric meshing or volumetric integral evaluations.
While based upon a boundary element method (BEM) for linear elastodynamics, our method goes beyond simple adoption of BEM by addressing several of its key limitations. We alleviate the large-displacement artifacts of linear elasticity by extending BEM with a moving reference frame and surface-only fictitious forces, so that it only needs to handle deformations. To reduce memory and computational costs, we present a simple and practical method to compress the series of dense matrices required to simulate the propagation of elastic waves over time. Furthermore, we explore a constraint enforcement mechanism and demonstrate the applicability of our method to general computer animation problems, such as frictional contact.

Item
Stability Analysis of Explicit MPM (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Bai, Song; Schroeder, Craig; Dominik L. Michels; Soeren Pirk
In this paper we analyze the stability of the explicit material point method (MPM). We focus on PIC, APIC, and CPIC transfers using quadratic and cubic splines in two and three dimensions. We perform a fully three-dimensional Von Neumann stability analysis to study the behavior within the bulk of a material. This reveals the relationship between the sound speed, the CFL number, and the actual time step restriction, as well as its dependence on discretization options. We note that boundaries are generally less stable than the interior, with stable time steps generally decreasing until the limiting case when particles become isolated. We then analyze the stability of a single particle to derive a novel time step restriction that stabilizes simulations at their boundaries. Finally, we show that for explicit MPM with APIC or CPIC transfers, there are pathological cases where growth is observed at arbitrarily small time step sizes.
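The textbook relationship between sound speed, CFL number, and time step that the analysis above refines can be sketched as follows. This is the classical CFL-style bound, not the paper's derived restriction, and the parameter names are illustrative.

```python
import math

def mpm_time_step(dx, youngs_modulus, density, cfl=0.5, max_velocity=0.0):
    """Classical explicit time step bound: the step must not let
    information (elastic waves plus bulk motion) cross more than
    `cfl` grid cells of width dx."""
    # Elastic sound speed of the material, c = sqrt(E / rho).
    c = math.sqrt(youngs_modulus / density)
    return cfl * dx / (c + max_velocity)
```

Stiffer or lighter materials raise the sound speed and shrink the stable step, which is why the discretization-dependent corrections the paper derives matter in practice.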
While these cases do not necessarily pose a problem for practical usage, they do suggest that a guarantee of stability may be theoretically impossible and that necessary but not sufficient time step restrictions may be a practical compromise.

Item
Pose Representations for Deep Skeletal Animation (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Andreou, Nefeli; Aristidou, Andreas; Chrysanthou, Yiorgos; Dominik L. Michels; Soeren Pirk
Data-driven skeletal animation relies on the existence of a suitable learning scheme, which can capture the rich context of motion. However, commonly used motion representations often fail to accurately encode the full articulation of motion, or present artifacts. In this work, we address the fundamental problem of finding a robust pose representation for motion, suitable for deep skeletal animation, one that can better constrain poses and faithfully capture nuances correlated with skeletal characteristics. Our representation is based on dual quaternions, mathematical abstractions with well-defined operations that simultaneously encode rotational and positional information, enabling a rich encoding centered around the root. We demonstrate that our representation overcomes common motion artifacts, and assess its performance compared to other popular representations. We conduct an ablation study to evaluate the impact of various losses that can be incorporated during learning. Leveraging the fact that our representation implicitly encodes skeletal motion attributes, we train a network on a dataset comprising skeletons with different proportions, without the need to first retarget them to a universal skeleton, which causes subtle motion elements to be missed. Qualitative results demonstrate the usefulness of the parameterization in skeleton-specific synthesis.

Item
SCA 2022 CGF 41-8: Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Dominik L. Michels; Soeren Pirk; Dominik L. Michels; Soeren Pirk

Item
Physically Based Shape Matching (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Müller, Matthias; Macklin, Miles; Chentanez, Nuttapong; Jeschke, Stefan; Dominik L. Michels; Soeren Pirk
The shape matching method is a popular approach for simulating deformable objects in interactive applications due to its stability and simplicity. An important feature is that no mesh is needed, since the method works on arbitrary local groups within a set of particles. A major drawback of shape matching is that it is geometrically motivated and not derived from physical principles, which makes calibration difficult. The fact that the method does not conserve volume can yield visual artifacts, e.g. when a tire is compressed but does not bulge. In this paper we present a new meshless simulation method that is related to shape matching but derived from continuous constitutive models. Volume conservation and stiffness can be specified with physical parameters. Further, if the elements of a tetrahedral mesh are used as groups, our method perfectly reproduces FEM-based simulations.

Item
A Second-Order Explicit Pressure Projection Method for Eulerian Fluid Simulation (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Jiang, Junwei; Shen, Xiangda; Gong, Yuning; Fan, Zeng; Liu, Yanli; Xing, Guanyu; Ren, Xiaohua; Zhang, Yanci; Dominik L. Michels; Soeren Pirk
In this paper, we propose a novel second-order explicit midpoint method to address the issue of energy loss and vorticity dissipation in Eulerian fluid simulation. The basic idea is to explicitly compute the pressure gradient at the middle time of each time step and apply it to the velocity field after advection. Theoretically, our solver can achieve higher accuracy than first-order solvers at similar computational cost. On the other hand, our method is at least twice as fast as implicit second-order solvers, at the cost of a small loss of accuracy.
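The second-order behaviour of an explicit midpoint step, which underlies the solver above, can be checked on a generic rate function. This is a minimal sketch of the integrator family, not the paper's pressure solver; in the paper the rate would combine advection and the mid-step pressure gradient.

```python
def midpoint_step(u, f, dt):
    # Evaluate the rate at the half step, then apply it over the
    # full step: second-order accurate in dt.
    u_half = u + 0.5 * dt * f(u)
    return u + dt * f(u_half)

def integrate(u0, f, dt, steps):
    u = u0
    for _ in range(steps):
        u = midpoint_step(u, f, dt)
    return u
```

On the test problem du/dt = -u the midpoint scheme's error shrinks quadratically with dt, whereas forward Euler's shrinks only linearly, which is the accuracy gain the abstract claims over first-order solvers.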
We have carried out a large number of 2D and 3D numerical experiments to verify the effectiveness and practicality of our algorithm.

Item
A Second Order Cone Programming Approach for Simulating Biphasic Materials (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Tang, Pengbin; Coros, Stelian; Thomaszewski, Bernhard; Dominik L. Michels; Soeren Pirk
Strain limiting is a widely used approach for simulating biphasic materials such as woven textiles and biological tissue that exhibit a soft elastic regime followed by a hard deformation limit. However, existing methods are either based on slowly converging local iterations or offer no guarantees on convergence. In this work, we propose a new approach to strain limiting based on second order cone programming (SOCP). Our work is based on the key insight that upper bounds on per-triangle deformations lead to convex quadratic inequality constraints. Though nonlinear, these constraints can be reformulated as inclusion conditions on convex sets, leading to a second order cone programming problem: a convex optimization problem that (a) is guaranteed to have a unique solution and (b) allows us to leverage efficient conic programming solvers. We first cast strain limiting with anisotropic bounds on stretching as a quadratically constrained quadratic program (QCQP), then show how this QCQP can be mapped to a second order cone programming problem. We further propose a constraint reflection scheme and empirically show that it exhibits superior energy-preservation properties compared to conventional end-of-step projection methods. Finally, we demonstrate our prototype implementation on a set of examples and illustrate how different deformation limits can be used to model a wide range of material behaviors.

Item
Sketching Vocabulary for Crowd Motion (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Mathew, C. D. Tharindu; Benes, Bedrich; Aliaga, Daniel; Dominik L. Michels; Soeren Pirk
This paper proposes and evaluates a sketching language for authoring crowd motion. It focuses on the path, speed, thickness, and density parameters of crowd motion. A sketch-based vocabulary is proposed for each parameter and evaluated in a user study against complex crowd scenes. A sketch recognition pipeline converts the sketches into a crowd simulation. The user study results show that 1) participants at various skill levels can draw accurate crowd motion through sketching, 2) certain sketch styles lead to a more accurate representation of crowd parameters, and 3) sketching allows complex crowd motions to be produced in a few seconds. The results also show that some styles, although accurate, are less preferred than less accurate ones.

Item
Wassersplines for Neural Vector Field-Controlled Animation (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Zhang, Paul; Smirnov, Dmitriy; Solomon, Justin; Dominik L. Michels; Soeren Pirk
Much of computer-generated animation is created by manipulating meshes with rigs. While this approach works well for animating articulated objects like animals, it has limited flexibility for animating less structured free-form objects. We introduce Wassersplines, a novel trajectory inference method for animating unstructured densities based on recent advances in continuous normalizing flows and optimal transport. The key idea is to train a neurally parameterized velocity field that represents the motion between keyframes. Trajectories are then computed by advecting keyframes through the velocity field. We solve an additional Wasserstein barycenter interpolation problem to guarantee strict adherence to keyframes. Our tool can stylize trajectories through a variety of PDE-based regularizers to create different visual effects.
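Computing trajectories by advecting keyframes through a velocity field, as described above, reduces to numerical integration of point positions. The sketch below uses explicit Euler on 2D points, with a `velocity(p, t)` callable standing in for the paper's neurally parameterized field; the integrator choice and signatures are assumptions.

```python
def advect(points, velocity, t0, t1, steps=100):
    """Advect 2D points through a time-dependent velocity field
    velocity(p, t) -> (vx, vy) from time t0 to t1 with explicit
    Euler steps."""
    dt = (t1 - t0) / steps
    pts = [tuple(p) for p in points]
    for k in range(steps):
        t = t0 + k * dt
        pts = [(x + dt * velocity((x, y), t)[0],
                y + dt * velocity((x, y), t)[1]) for (x, y) in pts]
    return pts
```

In the paper's setting the field would be trained so that advected keyframe densities match the next keyframe, with PDE regularizers shaping the in-between motion.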
We demonstrate our tool on various keyframe interpolation problems to produce temporally coherent animations without meshing or rigging.

Item
MP-NeRF: Neural Radiance Fields for Dynamic Multi-person Synthesis from Sparse Views (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Chao, Xian Jin; Leung, Howard; Dominik L. Michels; Soeren Pirk
Multi-person novel view synthesis aims to generate free-viewpoint videos for dynamic scenes of multiple persons. However, current methods require numerous views to reconstruct a dynamic person and achieve good performance only when a single person is present in the video. This paper aims to reconstruct a multi-person scene with fewer views, especially addressing the occlusion and interaction problems that appear in multi-person scenes. We propose MP-NeRF, a practical method for multi-person novel view synthesis from sparse cameras without pre-scanned template human models. We apply a multi-person SMPL template as the identity and human motion prior. We then build a global latent code to integrate the relative observations among multiple people, so that we can represent multiple dynamic people as separate neural radiance representations from sparse views. Experiments on the multi-person dataset MVMP show that our method is superior to other state-of-the-art methods.

Item
Combining Motion Matching and Orientation Prediction to Animate Avatars for Consumer-Grade VR Devices (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Ponton, Jose Luis; Yun, Haoran; Andujar, Carlos; Pelechano, Nuria; Dominik L. Michels; Soeren Pirk
The animation of user avatars plays a crucial role in conveying their pose, gestures, and relative distances to virtual objects or other users. Self-avatar animation in immersive VR helps improve the user experience and provides a Sense of Embodiment.
However, consumer-grade VR devices typically include at most three trackers: one at the Head Mounted Display (HMD) and two at the handheld VR controllers. Since the problem of reconstructing the user pose from such sparse data is ill-defined, especially for the lower body, the approach adopted by most VR games is to assume the body orientation matches that of the HMD and to apply animation blending and time-warping from a reduced set of animations. Unfortunately, this approach produces noticeable mismatches between user and avatar movements. In this work we present a new approach to animating user avatars that is suitable for current mainstream VR devices. First, we use a neural network to estimate the user's body orientation based on the tracking information from the HMD and the hand controllers. We then use this orientation, together with the velocity and rotation of the HMD, to build a feature vector that feeds a Motion Matching algorithm. We built a MoCap database with animations of VR users wearing an HMD and used it to test our approach on both self-avatars and other users' avatars. Our results show that our system can provide a large variety of lower-body animations while correctly matching the user orientation, which in turn allows us to represent not only forward movements but also stepping in any direction.

Item
Context-based Style Transfer of Tokenized Gestures (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Kuriyama, Shigeru; Mukai, Tomohiko; Taketomi, Takafumi; Mukasa, Tomoyuki; Dominik L. Michels; Soeren Pirk
Gestural animations in the amusement and entertainment fields often require rich expressions; however, it is still challenging to synthesize characteristic gestures automatically. Although style transfer based on a neural network model is a potential solution, existing methods mainly focus on cyclic motions such as gaits and require re-training when adding new motion styles.
Moreover, their per-pose transformation cannot consider time-dependent features, and therefore motion styles of different periods and timings are difficult to transfer. This limitation is fatal for gestural motions, which require complicated time alignment due to the variety of exaggerated or intentionally performed behaviors. This study introduces a context-based style transfer of gestural motions with neural networks to ensure stable conversion even for exaggerated, dynamically complicated gestures. We present a model based on a vision transformer for transferring gestures' content and style features by time-segmenting them to compose tokens in a latent space. We extend this model to yield the probability of swapping gestures' tokens for style transfer. A transformer model is well suited to semantically consistent matching among gesture tokens, owing to the correlation with spoken words. The compact architecture of our network model requires only a small number of parameters and little computational cost, which makes it suitable for real-time applications on ordinary devices. We introduce loss functions based on the restoration error of identically and cyclically transferred gesture tokens and on similarity losses of content and style evaluated by splicing features inside the transformer. This design of losses allows unsupervised and zero-shot learning, which provides scalability with respect to motion data. We comparatively evaluated our style transfer method, mainly focusing on expressive gestures, using our dataset captured for various scenarios and styles, and introduce new error metrics tailored for gestures.
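Time-segmenting a gesture sequence into fixed-length tokens for a transformer, as described above, might look like the following sketch. The segmentation into equal-length chunks follows the abstract's description; the tail-padding rule and function name are assumptions.

```python
def tokenize_gesture(frames, token_len):
    """Split a sequence of per-frame pose vectors into fixed-length
    tokens, padding the last token by repeating its final frame."""
    tokens = []
    for i in range(0, len(frames), token_len):
        chunk = frames[i:i + token_len]
        # Pad a short tail token by repeating the last frame.
        while len(chunk) < token_len:
            chunk = chunk + [chunk[-1]]
        tokens.append(chunk)
    return tokens
```

Each token would then be embedded into the latent space where the model scores token swaps between the content and style gestures.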
Our experiments show that our method is superior to existing methods in the numerical accuracy and stability of style transfer.

Item
Interaction Mix and Match: Synthesizing Close Interaction using Conditional Hierarchical GAN with Multi-Hot Class Embedding (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Goel, Aman; Men, Qianhui; Ho, Edmond S. L.; Dominik L. Michels; Soeren Pirk
Synthesizing multi-character interactions is a challenging task due to the complex and varied interactions between the characters. In particular, precise spatiotemporal alignment between characters is required when generating close interactions such as dancing and fighting. Existing work on generating multi-character interactions focuses on generating a single type of reactive motion for a given sequence, which results in a lack of variety in the resulting motions. In this paper, we propose a novel way to create realistic human reactive motions not present in the given dataset by mixing and matching different types of close interactions. We propose a Conditional Hierarchical Generative Adversarial Network with Multi-Hot Class Embedding to generate the mix-and-match reactive motions of the follower from a given motion sequence of the leader. Experiments are conducted on both noisy (depth-based) and high-quality (MoCap-based) interaction datasets. The quantitative and qualitative results show that our approach outperforms state-of-the-art methods on the given datasets. We also provide an augmented dataset with realistic reactive motions to stimulate future research in this area.

Item
Voice2Face: Audio-driven Facial and Tongue Rig Animations with cVAEs (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Villanueva Aylagas, Monica; Anadon Leon, Hector; Teye, Mattias; Tollmar, Konrad; Dominik L. Michels; Soeren Pirk
We present Voice2Face: a deep learning model that generates face and tongue animations directly from recorded speech.
Our approach consists of two steps: a conditional variational autoencoder generates mesh animations from speech, and a separate module maps the animations to rig controller space. Our contributions include an automated method for speech style control, a method to train a model with data from multiple quality levels, and a method for animating the tongue. Unlike previous work, our model generates animations without speaker-dependent characteristics while allowing speech style control. We demonstrate through a user study that Voice2Face significantly outperforms a state-of-the-art baseline model in terms of perceived animation quality, and our quantitative evaluation suggests that Voice2Face yields more accurate lip closure for speech with bilabials thanks to our speech style optimization. Both evaluations also show that our data quality conditioning scheme outperforms both an unconditioned model and a model trained on a smaller high-quality dataset. Finally, the user study shows a preference for animations that include the tongue. Results from our model can be seen at https://go.ea.com/voice2face.
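The second step, mapping mesh animations to rig controller space, could plausibly be implemented as a least-squares projection of per-frame vertex offsets onto a set of rig basis shapes. This is purely an illustrative assumption; the paper describes a separate learned module, whose internals are not given here.

```python
import numpy as np

def retarget_to_rig(vertex_offsets, basis_shapes):
    """Find rig controller weights whose blend of basis shapes best
    reproduces the given per-frame mesh vertex offsets (hypothetical
    linear stand-in for the paper's mapping module)."""
    # One column per rig controller's basis shape.
    B = np.stack(basis_shapes, axis=1)
    # Least-squares controller weights.
    weights, *_ = np.linalg.lstsq(B, vertex_offsets, rcond=None)
    return weights
```

A learned mapping can of course capture nonlinear rig behaviour that a fixed linear basis cannot; the sketch only conveys the direction of the data flow from mesh space to controller space.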