42-Issue 1
Browsing 42-Issue 1 by Subject "animation"
Now showing 1 - 5 of 5
Item
Detail‐Aware Deep Clothing Animations Infused with Multi‐Source Attributes (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023)
Li, T.; Shi, R.; Kanai, T.; Hauser, Helwig and Alliez, Pierre
This paper presents a novel learning‐based clothing deformation method that generates rich and plausible detailed deformations for garments worn by bodies of various shapes in various animations. In contrast to existing learning‐based methods, which require numerous trained models for different garment topologies or poses and cannot easily produce rich details, we use a unified framework to produce high‐fidelity deformations efficiently and easily. Specifically, we first observe that the fit between the garment and the body has an important impact on the degree of folds. We then design an attribute parser to generate detail‐aware encodings and infuse them into a graph neural network, thereby enhancing the discrimination of details under diverse attributes. Furthermore, to achieve better convergence and avoid overly smooth deformations, we reconstruct the output to mitigate the complexity of the learning task. Experimental results show that our deformation method outperforms existing methods in terms of generalization ability and quality of details.

Item
Differentiable Depth for Real2Sim Calibration of Soft Body Simulations (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023)
Arnavaz, K.; Nielsen, M. Kragballe; Kry, P. G.; Macklin, M.; Erleben, K.; Hauser, Helwig and Alliez, Pierre
In this work, we present a novel approach for calibrating material model parameters for soft body simulations using real data. We use a fully differentiable pipeline, combining a differentiable soft body simulator with differentiable depth rendering, which permits fast gradient‐based optimization. Our method requires no data pre‐processing and minimal experimental set‐up, as we directly minimize the L2‐norm between raw LIDAR scans and rendered simulation states. In essence, we provide the first marker‐free approach for calibrating a soft‐body simulator to match observed real‐world deformations. Our approach is inexpensive, as it requires only a consumer‐level LIDAR sensor rather than a professional marker‐based motion capture system. We investigate the effects of different material parameterizations and evaluate convergence of the parameter optimization in single‐ and multi‐material scenarios of varying complexity. Finally, we show that our set‐up can be extended to optimize for dynamic behaviour as well.
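To make the calibration loop in the preceding abstract concrete, here is a minimal sketch, not the paper's pipeline: the differentiable soft‐body simulator and depth renderer are collapsed into a hypothetical one‐parameter linear‐elastic stand‐in (`render_depth`, the `load` array, and the compliance parameterization are all assumptions made for illustration), and the same kind of L2 objective between a synthetic scan and the rendered state is minimized by gradient descent.

```python
import numpy as np

# Toy stand-in only: the paper differentiates through an actual soft-body
# simulator and a depth renderer; here both are replaced by a hypothetical
# one-parameter linear-elastic model so the calibration loop is easy to follow.

rng = np.random.default_rng(0)
rest_depth = np.linspace(0.8, 1.2, 64)    # depth profile of the undeformed surface (m)
load = rng.uniform(1.0, 3.0, 64)          # known per-sample load pressing on the surface (N)

def render_depth(compliance):
    """Hypothetical simulator + renderer: deflection grows linearly with compliance."""
    return rest_depth + load * compliance

# Synthetic "LIDAR scan": ground-truth compliance 0.04 m/N (stiffness 25 N/m) plus sensor noise.
observed = render_depth(0.04) + rng.normal(0.0, 1e-3, 64)

def l2_loss(compliance):
    residual = render_depth(compliance) - observed
    return float(residual @ residual)

# Plain gradient descent on the compliance; the gradient is analytic for this toy model.
c, lr = 0.2, 1e-3                          # initial guess (stiffness 5 N/m) and step size
for _ in range(100):
    grad = 2.0 * load @ (render_depth(c) - observed)
    c -= lr * grad

print(f"recovered compliance {c:.4f}, loss {l2_loss(c):.2e}")  # lands near the true 0.04
```

Optimizing the compliance 1/k rather than the stiffness k keeps this toy objective quadratic, echoing the abstract's point that the choice of material parameterization affects convergence.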
Item
Monolithic Friction and Contact Handling for Rigid Bodies and Fluids Using SPH (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023)
Probst, T.; Teschner, M.; Hauser, Helwig and Alliez, Pierre
We propose a novel monolithic pure SPH formulation to simulate fluids strongly coupled with rigid bodies. This includes fluid incompressibility, fluid–rigid interface handling, and rigid–rigid contact handling with a viable implicit particle‐based dry friction formulation. The resulting global system is solved using a new accelerated solver implementation that outperforms existing fluid and coupled rigid–fluid simulation approaches. We compare the results of our simulation method to analytical solutions, show performance evaluations of our solver, and present a variety of new and challenging simulation scenarios.

Item
Remeshing‐free Graph‐based Finite Element Method for Fracture Simulation (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023)
Mandal, A.; Chaudhuri, P.; Chaudhuri, S.; Hauser, Helwig and Alliez, Pierre
Fracture produces new mesh fragments that introduce additional degrees of freedom into the system dynamics. Existing finite element method (FEM) based solutions suffer from increasing computational cost as the system matrix size grows. We solve this problem by presenting a graph‐based FEM model for fracture simulation that is remeshing‐free and easily scales to high‐resolution meshes. Our algorithm models fracture on the graph induced by a volumetric mesh with tetrahedral elements. We relabel the edges of the graph using a computed damage variable to initialize and propagate fracture. We prove that the non‐linear, hyper‐elastic strain energy density is expressible entirely in terms of the edge lengths of the induced graph. This allows us to reformulate the system dynamics for the relabelled graph without changing the size of the system dynamics matrix, which keeps the computational cost from blowing up. The fractured surface has to be reconstructed explicitly only for visualization purposes. We simulate standard laboratory experiments from structural mechanics and compare the results with corresponding real‐world experiments. We fracture objects made of a variety of brittle and ductile materials, and show that our technique offers stability and speed that are unmatched in the current literature.

Item
ZeroEGGS: Zero‐shot Example‐based Gesture Generation from Speech (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023)
Ghorbani, Saeed; Ferstl, Ylva; Holden, Daniel; Troje, Nikolaus F.; Carbonneau, Marc‐André; Hauser, Helwig and Alliez, Pierre
We present ZeroEGGS, a neural network framework for speech‐driven gesture generation with zero‐shot style control by example. This means style can be controlled with only a short example motion clip, even for motion styles unseen during training. Our model uses a variational framework to learn a style embedding, making it easy to modify style through latent‐space manipulation or blending and scaling of style embeddings. The probabilistic nature of our framework further enables the generation of a variety of outputs for a given input, addressing the stochastic nature of gesture motion. In a series of experiments, we first demonstrate the flexibility and generalizability of our model to new speakers and styles. In a user study, we then show that our model outperforms previous state‐of‐the‐art techniques in naturalness of motion, appropriateness for speech, and style portrayal. Finally, we release a high‐quality dataset of full‐body gesture motion, including fingers, with speech, spanning 19 different styles. Our code and data are publicly available at .
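The latent‐space control described in the ZeroEGGS abstract amounts to simple arithmetic on style embeddings. The sketch below is illustrative only and is not the released ZeroEGGS code: `encode_style` is a hypothetical placeholder for the learned variational style encoder, and the gesture generator that would consume the blended, scaled embedding together with speech features is omitted.

```python
import numpy as np

EMBED_DIM = 64  # assumed embedding size for this sketch

def encode_style(example_clip: np.ndarray) -> np.ndarray:
    """Placeholder for the style encoder: mean-pool the clip over time and
    truncate to the embedding size. The real encoder is a trained variational network."""
    return example_clip.mean(axis=0)[:EMBED_DIM]

def blend_styles(z_a: np.ndarray, z_b: np.ndarray, t: float) -> np.ndarray:
    """Linear interpolation between two style embeddings (t=0 gives z_a, t=1 gives z_b)."""
    return (1.0 - t) * z_a + t * z_b

def scale_style(z: np.ndarray, gain: float) -> np.ndarray:
    """Scale an embedding to exaggerate (gain > 1) or attenuate (gain < 1) a style."""
    return gain * z

# Two short example clips (frames x pose features), standing in for real motion capture.
clip_a = np.random.default_rng(1).normal(size=(120, 75))
clip_b = np.random.default_rng(2).normal(size=(120, 75))

z = blend_styles(encode_style(clip_a), encode_style(clip_b), t=0.3)
z = scale_style(z, gain=1.5)
print(z.shape)  # (64,) -- this embedding would condition the speech-driven gesture generator
```

Because style lives in a single vector, mixing two example clips or exaggerating one of them reduces to the interpolation and scaling shown above.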