Browsing by Author "Ho, Edmond S. L."
Now showing 1 - 3 of 3
Item: Data-Driven Crowd Motion Control With Multi-Touch Gestures (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018)
Shen, Yijun; Henry, Joseph; Wang, He; Ho, Edmond S. L.; Komura, Taku; Shum, Hubert P. H.; Chen, Min and Benes, Bedrich
Controlling a crowd using multi-touch devices appeals to the computer games and animation industries, as such devices provide a high-dimensional control signal that can effectively define the crowd formation and movement. However, existing works relying on pre-defined control schemes require the users to learn a scheme that may not be intuitive. We propose a data-driven gesture-based crowd control system, in which the control scheme is learned from example gestures provided by different users. In particular, we build a database with pairwise samples of gestures and crowd motions. To effectively generalize the gesture style of different users, such as the use of different numbers of fingers, we propose a set of gesture features for representing a set of hand gesture trajectories. Similarly, to represent crowd motion trajectories of different numbers of characters over time, we propose a set of crowd motion features that are extracted from a Gaussian mixture model. Given a run-time gesture, our system extracts the nearest gestures from the database and interpolates the corresponding crowd motions in order to generate the run-time control. Our system is accurate and efficient, making it suitable for real-time applications such as real-time strategy games and interactive animation controls.
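As a rough illustration of the retrieval-and-blending pipeline described in the abstract above, the sketch below fits a Gaussian mixture model to character positions to obtain a fixed-length crowd-motion feature, then retrieves the k nearest example gestures and distance-weights their associated crowd motions. The function names, feature layout, and weighting scheme are assumptions for illustration only, not the authors' implementation.

# Illustrative sketch only: simplified GMM crowd features plus
# nearest-neighbour retrieval and blending, in the spirit of the abstract.
import numpy as np
from sklearn.mixture import GaussianMixture

def crowd_motion_features(positions, n_components=3):
    """Summarise one frame of 2D character positions (N x 2) with a GMM."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(positions)
    # Concatenate means and flattened covariances into a fixed-length feature.
    return np.concatenate([gmm.means_.ravel(), gmm.covariances_.ravel()])

def retrieve_and_blend(query_gesture_feat, gesture_db, crowd_db, k=3):
    """Find the k nearest example gestures and distance-weight their crowd motions."""
    dists = np.linalg.norm(gesture_db - query_gesture_feat, axis=1)
    idx = np.argsort(dists)[:k]
    weights = 1.0 / (dists[idx] + 1e-6)
    weights /= weights.sum()
    # Weighted blend of the stored crowd-motion features gives the run-time control.
    return np.einsum('k,kd->d', weights, crowd_db[idx])

Here, gesture_db and crowd_db stand for paired rows of precomputed gesture and crowd-motion features from the example database; their exact construction is not specified in the abstract.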
Item: Emotion Transfer for 3D Hand Motion using StarGAN (The Eurographics Association, 2020)
Chan, Jacky C. P.; Irimia, Ana-Sabina; Ho, Edmond S. L.; Ritsos, Panagiotis D. and Xu, Kai
In this paper, we propose a new data-driven framework for 3D hand motion emotion transfer. Specifically, we first capture high-quality hand motion using VR gloves. The hand motion data is then annotated with the emotion type and converted to images to facilitate the motion synthesis process, and the new dataset will be made available to the public. To the best of our knowledge, this is the first public dataset with annotated hand motions. We further formulate emotion transfer for 3D hand motion as an image-to-image translation problem, which we solve by adapting the StarGAN framework. Our new framework is able to synthesize new motions given a target emotion type and an unseen input motion. Experimental results show that our framework can produce high-quality and consistent hand motions.

Item: Interaction Mix and Match: Synthesizing Close Interaction using Conditional Hierarchical GAN with Multi-Hot Class Embedding (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Goel, Aman; Men, Qianhui; Ho, Edmond S. L.; Dominik L. Michels; Soeren Pirk
Synthesizing multi-character interactions is a challenging task due to the complex and varied interactions between the characters. In particular, precise spatiotemporal alignment between characters is required in generating close interactions such as dancing and fighting. Existing work on generating multi-character interactions focuses on generating a single type of reactive motion for a given sequence, which results in a lack of variety in the resultant motions. In this paper, we propose a novel way to create realistic human reactive motions that are not present in the given dataset by mixing and matching different types of close interactions. We propose a Conditional Hierarchical Generative Adversarial Network with Multi-Hot Class Embedding to generate the Mix and Match reactive motions of the follower from a given motion sequence of the leader. Experiments are conducted on both noisy (depth-based) and high-quality (MoCap-based) interaction datasets. The quantitative and qualitative results show that our approach outperforms the state-of-the-art methods on the given datasets. We also provide an augmented dataset with realistic reactive motions to stimulate future research in this area.
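As a rough illustration of the multi-hot class embedding mentioned in the last abstract, the sketch below conditions a simple generator on a multi-hot interaction-class vector, in which several classes may be active at once, embedded and concatenated with an encoding of the leader's motion. The module name, layer sizes, and class labels are hypothetical; this is not the paper's hierarchical network.

# Minimal sketch of multi-hot class conditioning (assumed names and sizes).
import torch
import torch.nn as nn

class MultiHotConditionedGenerator(nn.Module):
    def __init__(self, motion_dim, n_classes, embed_dim=32, hidden=256):
        super().__init__()
        self.class_embed = nn.Linear(n_classes, embed_dim)   # multi-hot -> dense embedding
        self.net = nn.Sequential(
            nn.Linear(motion_dim + embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, motion_dim),                   # follower motion encoding
        )

    def forward(self, leader_motion, class_multi_hot):
        # class_multi_hot: (batch, n_classes), possibly with several 1s at once.
        cond = self.class_embed(class_multi_hot.float())
        return self.net(torch.cat([leader_motion, cond], dim=-1))

# Example: condition on a blend of two interaction types (hypothetical labels).
gen = MultiHotConditionedGenerator(motion_dim=128, n_classes=4)
leader = torch.randn(1, 128)
mix = torch.tensor([[1., 0., 1., 0.]])
follower = gen(leader, mix)

Using a multi-hot vector rather than a one-hot label is what lets a single conditioning signal combine several interaction types, which is the "mix and match" idea the abstract describes.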