Browsing by Author "Jin, Taeil"
Now showing 1 - 3 of 3
Item
DAFNet: Generating Diverse Actions for Furniture Interaction by Learning Conditional Pose Distribution (The Eurographics Association and John Wiley & Sons Ltd., 2023) Jin, Taeil; Lee, Sung-Hee; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
We present DAFNet, a novel data-driven framework capable of generating various actions for indoor environment interactions. By taking desired root and upper-body poses as control inputs, DAFNet generates whole-body poses suitable for furniture of various shapes and combinations. To enable the generation of diverse actions, we introduce an action predictor that automatically infers the probabilities of individual action types based on the control input and environment. The action predictor is learned in an unsupervised manner by training a Gaussian Mixture Variational Autoencoder (GMVAE). Additionally, we propose a two-part normalizing-flow-based pose generator that sequentially generates upper- and lower-body poses. This two-part model improves motion quality and the accuracy of satisfying conditions over a single model generating the whole body. Our experiments show that DAFNet can create continuous character motion for indoor scene scenarios, and both qualitative and quantitative evaluations demonstrate the effectiveness of our framework.

Item
Interaction Motion Retargeting to Highly Dissimilar Furniture Environment (ACM, 2019) Jin, Taeil; Lee, Sung-Hee; Batty, Christopher; Huang, Jin
Retargeting a human-environment interaction motion to a different environment remains an important research topic in computer animation. This paper introduces a novel method that can retarget an interaction motion to a highly dissimilar environment, where not every contact in the source environment can be preserved. The key idea of the method is to prioritize the contacts and preserve the more important ones while sacrificing others if necessary.
Specifically, we propose a method to detect a manipulation contact and preserve that contact in the target furniture environment by allowing a large deviation from the input pose.

Item
MOVIN: Real-time Motion Capture using a Single LiDAR (The Eurographics Association and John Wiley & Sons Ltd., 2023) Jang, Deok-Kyeong; Yang, Dongseok; Jang, Deok-Yun; Choi, Byeoli; Jin, Taeil; Lee, Sung-Hee; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Recent advancements in technology have brought forth new forms of interactive applications, such as the social metaverse, where end users interact with each other through their virtual avatars. In such applications, precise full-body tracking is essential for an immersive experience and a sense of embodiment with the virtual avatar. However, current motion capture systems are not easily accessible to end users due to their high cost, the special skills required to operate them, or the discomfort associated with wearable devices. In this paper, we present MOVIN, a data-driven generative method for real-time motion capture with global tracking, using a single LiDAR sensor. Our autoregressive conditional variational autoencoder (CVAE) model learns the distribution of pose variations conditioned on the given 3D point cloud from the LiDAR. As a central factor for high-accuracy motion capture, we propose a novel feature encoder that learns the correlation between the historical 3D point cloud data and global and local pose features, resulting in effective learning of the pose prior. Global pose features include root translation, rotation, and foot contacts, while local features comprise joint positions and rotations. Subsequently, a pose generator takes the sampled latent variable, along with the features from the previous frame, to generate a plausible current pose.
Our framework accurately predicts the performer's 3D global information and local joint details while effectively maintaining temporally coherent movement across frames. We demonstrate the effectiveness of our architecture through quantitative and qualitative evaluations against state-of-the-art methods. Additionally, we implement a real-time application to showcase our method in real-world scenarios. The MOVIN dataset is available at https://movin3d.github.io/movin_pg2023/.
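The MOVIN abstract describes an autoregressive CVAE: a latent variable is sampled from a learned pose prior and decoded, together with features from the previous frame, into the current pose. As a minimal illustration of that sampling-and-decoding step, here is a NumPy sketch with a toy linear decoder; all dimensions, weights, and names here are hypothetical and do not come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps (the standard VAE reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def generate_pose(z, prev_frame_feat, decoder_w, decoder_b):
    """Toy linear 'decoder': maps [latent, previous-frame features] to a pose vector."""
    h = np.concatenate([z, prev_frame_feat])
    return decoder_w @ h + decoder_b

# Hypothetical sizes: 16-D latent, 8-D previous-frame feature, 24-D pose vector.
latent_dim, feat_dim, pose_dim = 16, 8, 24

mu = np.zeros(latent_dim)        # prior mean (unit Gaussian for illustration)
log_var = np.zeros(latent_dim)   # prior log-variance
decoder_w = rng.standard_normal((pose_dim, latent_dim + feat_dim)) * 0.1
decoder_b = np.zeros(pose_dim)

prev_frame_feat = np.zeros(feat_dim)  # would come from the previous frame in practice
z = reparameterize(mu, log_var, rng)
pose = generate_pose(z, prev_frame_feat, decoder_w, decoder_b)
print(pose.shape)  # (24,)
```

In the actual system, the prior parameters would be predicted from the encoded LiDAR point-cloud features rather than fixed, and the decoder would be a learned network; the sketch only shows the data flow of one autoregressive generation step.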