Volume 40 (2021)
Browsing Volume 40 (2021) by Subject "Animation"
Now showing 1 - 4 of 4
Item: Blending of Hyperbolic Closed Curves (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Ikemakhen, Aziz; Ahanchaou, Taoufik; Digne, Julie and Crane, Keenan
In recent years, game developers have become interested in developing games in hyperbolic space. Shape blending is one of the fundamental techniques for producing animations and video games. This paper presents two algorithms for blending between two closed curves in the hyperbolic plane in a manner that guarantees that the intermediate curves are closed. We deal with discrete hyperbolic curves on the Poincaré disc, a well-known model of the hyperbolic plane. We linearly interpolate the geometric invariants of hyperbolic polygons, namely hyperbolic side lengths, exterior angles and discrete geodesic curvature. To be able to generate closed intermediate curves, we formulate the closing condition of a hyperbolic polygon in terms of its geodesic side lengths and exterior angles. Finally, experimental results illustrate that the proposed methods generate aesthetically pleasing blends of closed hyperbolic curves.

Item: Deep Learning-Based Unsupervised Human Facial Retargeting (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Kim, Seonghyeon; Jung, Sunjin; Seo, Kwanggyoon; Ribera, Roger Blanco i; Noh, Junyong; Zhang, Fang-Lue and Eisemann, Elmar and Singh, Karan
Traditional approaches to retargeting existing facial blendshape animations to other characters rely heavily on manually paired data, including corresponding anchors, expressions, or semantic parametrizations, to preserve the characteristics of the original performance. In this paper, inspired by recent developments in face swapping and reenactment, we propose a novel unsupervised learning method that reformulates the retargeting of 3D facial blendshape-based animations in the image domain. The expressions of a source model are transferred to a target model via rendered images of the source animation. For this purpose, a reenactment network is trained with rendered images of various expressions created by the source and target models in a shared latent space. The use of a shared latent space enables automatic cross-mapping, obviating the need for manual pairing. Next, a blendshape prediction network is used to extract the blendshape weights from the translated image to complete the retargeting of the animation onto a 3D target model. Our method allows for fully unsupervised retargeting of facial expressions between models of different configurations and, once trained, is suitable for automatic real-time applications.
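For the curve-blending item above, the interpolation step can be pictured as a straight linear blend of the polygons' invariants. The Python fragment below is a minimal illustrative sketch only: the function name and array layout are assumptions, and it omits the paper's actual contributions, namely the closing condition and the reconstruction of vertices on the Poincaré disc.

    import numpy as np

    def blend_invariants(lengths_a, angles_a, lengths_b, angles_b, t):
        # Linearly interpolate the geometric invariants (hyperbolic side
        # lengths and exterior angles) of two hyperbolic polygons at blend
        # parameter t in [0, 1]. Enforcing the closing condition on the
        # result is a separate step described in the paper.
        lengths_t = (1.0 - t) * np.asarray(lengths_a) + t * np.asarray(lengths_b)
        angles_t = (1.0 - t) * np.asarray(angles_a) + t * np.asarray(angles_b)
        return lengths_t, angles_t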
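The facial-retargeting item above describes a two-network inference pipeline. The sketch below is a hypothetical outline of that flow, not the authors' code; the objects and methods (render, translate, predict) are assumed placeholders.

    def retarget_animation(source_weights, source_rig, reenactment_net, blendshape_net):
        # Hypothetical per-frame inference following the abstract: render the
        # source expression, map it into the target's image domain through the
        # shared latent space, then recover target blendshape weights.
        target_weights = []
        for w_src in source_weights:                      # one blendshape vector per frame
            img_src = source_rig.render(w_src)            # rendered source expression
            img_tgt = reenactment_net.translate(img_src)  # cross-mapped to the target domain
            target_weights.append(blendshape_net.predict(img_tgt))
        return target_weights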
Item: Learning and Exploring Motor Skills with Spacetime Bounds (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Ma, Li-Ke; Yang, Zeshi; Tong, Xin; Guo, Baining; Yin, KangKang; Mitra, Niloy and Viola, Ivan
Equipping characters with diverse motor skills is the current bottleneck of physics-based character animation. We propose a Deep Reinforcement Learning (DRL) framework that enables physics-based characters to learn and explore motor skills from reference motions. The key insight is to use loose space-time constraints, termed spacetime bounds, to limit the search space in an early-termination fashion. As we only rely on the reference to specify loose spacetime bounds, our learning is more robust with respect to low-quality references. Moreover, spacetime bounds are hard constraints that improve learning of challenging motion segments, which can be ignored by imitation-only learning. We compare our method with state-of-the-art tracking-based DRL methods. We also show how to guide style exploration within the proposed framework.

Item: MultiResGNet: Approximating Nonlinear Deformation via Multi-Resolution Graphs (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Li, Tianxing; Shi, Rui; Kanai, Takashi; Mitra, Niloy and Viola, Ivan
This paper presents a graph-learning-based, highly generalizable method for automatically generating nonlinear deformation for characters with an arbitrary number of vertices. Large-scale character datasets with a significant number of poses are normally required to train such automatic generalization tasks. Two key contributions enable us to address this challenge while keeping our network general enough to achieve realistic deformation approximation. First, after the automatic linear deformation step, we encode the roughly deformed meshes by constructing graphs, for which we propose a novel graph feature representation with three descriptors to represent meshes of arbitrary characters in varying poses. Second, we design a multi-resolution graph network (MultiResGNet) that takes the constructed graphs as input and outputs the offset adjustments of each vertex end-to-end. By processing multi-resolution graphs, general features can be better extracted, and the network training no longer relies heavily on large amounts of training data. Experimental results show that the proposed method achieves better performance than prior studies in deformation approximation for unseen characters and poses.
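The spacetime-bounds item above hinges on early termination of rollouts that drift outside a loose bound around the reference motion. A minimal sketch of such a test follows; the distance measure and per-frame bound are assumptions for illustration, not the paper's exact formulation.

    import numpy as np

    def within_spacetime_bounds(sim_pose, ref_pose, bound):
        # The episode continues only while the simulated pose stays inside a
        # loose bound around the reference pose at the same time step.
        return np.linalg.norm(sim_pose - ref_pose) <= bound

    # Inside a DRL rollout (sketch):
    #   if not within_spacetime_bounds(pose[t], reference[t], bound[t]):
    #       done = True   # terminate the episode early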
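The MultiResGNet item describes a two-stage pipeline: a rough linear deformation followed by learned per-vertex offsets. The outline below is a hedged sketch of that data flow; linear_deform, build_graph and the network object are hypothetical stand-ins, not the published implementation.

    def approximate_deformation(rest_mesh, pose, linear_deform, build_graph, multires_gnn):
        # Stage 1: rough, linear (e.g. skinning-based) deformation of the mesh.
        rough = linear_deform(rest_mesh, pose)
        # Stage 2: encode the rough mesh as a graph and let the multi-resolution
        # graph network predict per-vertex offset adjustments.
        graph = build_graph(rough)
        offsets = multires_gnn(graph)
        # The final nonlinear approximation adds the predicted offsets.
        return rough.vertices + offsets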