Human Motion Synthesis and Control via Contextual Manifold Embedding
dc.contributor.author | Zeng, Rui | en_US |
dc.contributor.author | Dai, Ju | en_US |
dc.contributor.author | Bai, Junxuan | en_US |
dc.contributor.author | Pan, Junjun | en_US |
dc.contributor.author | Qin, Hong | en_US |
dc.contributor.editor | Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard | en_US |
dc.date.accessioned | 2021-10-14T10:05:37Z | |
dc.date.available | 2021-10-14T10:05:37Z | |
dc.date.issued | 2021 | |
dc.description.abstract | Modeling motion dynamics for precise and rapid control with deterministic data-driven models is challenging due to the natural randomness of human motion. To address this, we propose a novel framework for continuous motion control based on probabilistic latent variable models. Control is implemented by recurrently querying historical and target motion states rather than exact motion data. Our model takes a conditional encoder-decoder form in two stages. First, we use a Gaussian Process Latent Variable Model (GPLVM) to project motion poses onto a compact latent manifold. Motion states, such as walking phase and forward velocity, can be clearly identified by analysis on the manifold. Second, taking the manifold as a prior, a Recurrent Neural Network (RNN) encoder makes temporal latent predictions from the previous and control states. An attention module then morphs the prediction by measuring the latent similarities between the control and predicted states, thus dynamically preserving contextual consistency. Finally, the GP decoder reconstructs the latent states back into motion frames. Experiments on walking datasets show that our model maintains motion states autoregressively while performing rapid and smooth transitions in response to control. | en_US |
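The abstract describes a two-stage pipeline (latent manifold embedding, then recurrent prediction morphed by attention and decoded back to poses). The sketch below is only an illustrative reading of that pipeline, not the authors' implementation: the module names, tensor shapes, and the linear stand-ins for the GPLVM encoder and GP decoder are all assumptions made to keep the example self-contained.

```python
# Hedged sketch of the described two-stage controller (assumed shapes/modules,
# not the paper's code). The GPLVM/GP mappings are replaced by linear layers.
import torch
import torch.nn as nn

class LatentMotionController(nn.Module):
    def __init__(self, pose_dim=63, latent_dim=3, hidden_dim=128):
        super().__init__()
        self.to_latent = nn.Linear(pose_dim, latent_dim)   # stand-in for GPLVM encoder
        self.rnn = nn.GRU(latent_dim * 2, hidden_dim, batch_first=True)
        self.predict = nn.Linear(hidden_dim, latent_dim)
        self.attn = nn.MultiheadAttention(latent_dim, num_heads=1, batch_first=True)
        self.decode = nn.Linear(latent_dim, pose_dim)       # stand-in for GP decoder

    def forward(self, prev_poses, control_latent):
        # Stage 1: project historical poses onto the compact latent manifold.
        z_hist = self.to_latent(prev_poses)                  # (B, T, latent_dim)
        # Stage 2: condition each step on the control state, predict next latent.
        ctrl = control_latent.unsqueeze(1).expand_as(z_hist)
        h, _ = self.rnn(torch.cat([z_hist, ctrl], dim=-1))
        z_pred = self.predict(h[:, -1:, :])                  # (B, 1, latent_dim)
        # Attention morphs the prediction toward the control state by latent similarity.
        keys = torch.cat([z_pred, control_latent.unsqueeze(1)], dim=1)
        z_next, _ = self.attn(z_pred, keys, keys)
        # Decode the morphed latent state back to a full pose frame.
        return self.decode(z_next.squeeze(1))

# Usage: one autoregressive step from 10 history frames toward a control latent.
model = LatentMotionController()
history = torch.randn(1, 10, 63)     # hypothetical 63-DoF pose history
control = torch.randn(1, 3)          # hypothetical target latent (e.g. phase/velocity)
next_frame = model(history, control) # (1, 63)
```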
dc.description.sectionheaders | Fast Rendering and Movement | |
dc.description.seriesinformation | Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers | |
dc.identifier.doi | 10.2312/pg.20211383 | |
dc.identifier.isbn | 978-3-03868-162-5 | |
dc.identifier.pages | 25-30 | |
dc.identifier.uri | https://doi.org/10.2312/pg.20211383 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/pg20211383 | |
dc.publisher | The Eurographics Association | en_US |
dc.subject | Computing methodologies | |
dc.subject | Motion processing | |
dc.subject | Motion capture | |
dc.subject | Motion path planning | |
dc.subject | Learning latent representations | |
dc.title | Human Motion Synthesis and Control via Contextual Manifold Embedding | en_US |