Robust Diffusion-based Motion In-betweening

dc.contributor.author: Qin, Jia
dc.contributor.author: Yan, Peng
dc.contributor.author: An, Bo
dc.contributor.editor: Chen, Renjie
dc.contributor.editor: Ritschel, Tobias
dc.contributor.editor: Whiting, Emily
dc.date.accessioned: 2024-10-13T18:10:00Z
dc.date.available: 2024-10-13T18:10:00Z
dc.date.issued: 2024
dc.description.abstract: The emergence of learning-based motion in-betweening techniques offers animators a more efficient way to animate characters. However, existing non-generative methods either struggle to support long transition generation or produce results that lack diversity. Meanwhile, diffusion models have shown promising results in synthesizing diverse and high-quality motions driven by text and keyframes. Yet in these methods, keyframes often serve as a guide rather than a strict constraint and can sometimes be ignored when keyframes are sparse. To address these issues, we propose a lightweight yet effective diffusion-based motion in-betweening framework that generates animations conforming to keyframe constraints. We incorporate keyframe constraints into the training phase to enhance robustness in handling various constraint densities. Moreover, we employ relative positional encoding to improve the model's generalization on long-range in-betweening tasks. This approach enables the model to learn from short animations while generating realistic in-betweening motions spanning thousands of frames. We conduct extensive experiments to validate our framework using the newly proposed metrics K-FID, K-Diversity, and K-Error, designed to evaluate generative in-betweening methods. Results demonstrate that our method outperforms existing diffusion-based methods across various lengths and keyframe densities. We also show that our method can be applied to text-driven motion synthesis, offering fine-grained control over the generated results.
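The abstract names two mechanisms that lend themselves to a brief illustration: keyframe constraints enforced during training, and relative positional encoding for generalization to sequences longer than the training clips. Below is a minimal PyTorch sketch of both ideas. It is not the authors' implementation; the module names, dimensions, offset-clipping scheme, and masking convention are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class RelPosAttention(nn.Module):
    """Self-attention with a learned bias indexed by clipped frame offset,
    so attention depends on relative rather than absolute frame positions."""

    def __init__(self, d_model=256, n_heads=4, max_rel_dist=64):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.proj = nn.Linear(d_model, d_model)
        # One learned scalar per head per clipped relative offset.
        self.rel_bias = nn.Parameter(torch.zeros(n_heads, 2 * max_rel_dist + 1))
        self.max_rel_dist = max_rel_dist

    def forward(self, x):  # x: (batch, frames, d_model)
        B, T, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        k = k.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        attn = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        # Clipping the offsets means a test sequence longer than any
        # training clip still indexes valid bias entries.
        pos = torch.arange(T, device=x.device)
        offsets = (pos[None, :] - pos[:, None]).clamp(
            -self.max_rel_dist, self.max_rel_dist) + self.max_rel_dist
        attn = attn + self.rel_bias[:, offsets]  # broadcasts over batch
        out = attn.softmax(dim=-1) @ v
        return self.proj(out.transpose(1, 2).reshape(B, T, -1))


def apply_keyframe_constraint(x_noisy, x_clean, keyframe_mask):
    """Overwrite noisy frames with ground-truth poses at keyframes, so the
    constraint is seen during training rather than only imposed at sampling.
    keyframe_mask: (batch, frames, 1), 1 at constrained frames."""
    return keyframe_mask * x_clean + (1 - keyframe_mask) * x_noisy
```

In this sketch, applying the keyframe overwrite inside the training loop (rather than only at inference) is what makes keyframes behave as hard constraints instead of soft guidance, which is the distinction the abstract draws against prior diffusion-based methods.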
dc.description.number: 7
dc.description.sectionheaders: Human II
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 43
dc.identifier.doi: 10.1111/cgf.15260
dc.identifier.issn: 1467-8659
dc.identifier.pages: 11 pages
dc.identifier.uri: https://doi.org/10.1111/cgf.15260
dc.identifier.uri: https://diglib.eg.org/handle/10.1111/cgf15260
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd.
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: CCS Concepts: Computing methodologies → Motion capture; Neural networks
dc.title: Robust Diffusion-based Motion In-betweening
Files
Original bundle (2 files):
- cgf15260.pdf (8.93 MB, Adobe Portable Document Format)
- paper1455_mm.mp4 (42.55 MB, Video MP4)