DragPoser: Motion Reconstruction from Variable Sparse Tracking Signals via Latent Space Optimization

dc.contributor.author: Ponton, Jose Luis
dc.contributor.author: Pujol, Eduard
dc.contributor.author: Aristidou, Andreas
dc.contributor.author: Andujar, Carlos
dc.contributor.author: Pelechano, Nuria
dc.contributor.editor: Bousseau, Adrien
dc.contributor.editor: Day, Angela
dc.date.accessioned: 2025-05-09T09:11:26Z
dc.date.available: 2025-05-09T09:11:26Z
dc.date.issued: 2025
dc.description.abstract: High-quality motion reconstruction that follows the user's movements can be achieved by high-end mocap systems with many sensors. However, obtaining such animation quality with fewer input devices is gaining popularity as it brings mocap closer to the general public. The main challenges include the loss of end-effector accuracy in learning-based approaches and the lack of naturalness and smoothness in IK-based solutions. In addition, such systems are often finely tuned to a specific number of trackers and are highly sensitive to missing data, e.g., when a sensor is occluded or malfunctions. In response to these challenges, we introduce DragPoser, a novel deep-learning-based motion reconstruction system that accurately represents hard and dynamic constraints, attaining high end-effector position accuracy in real time. This is achieved through a pose optimization process within a structured latent space. Our system requires only one-time training on a large human motion dataset; constraints can then be dynamically defined as losses, and the pose is iteratively refined by computing the gradients of these losses within the latent space. To further enhance our approach, we incorporate a Temporal Predictor network, which employs a Transformer architecture to directly encode temporality within the latent space. This network ensures the pose optimization is confined to the manifold of valid poses, and it leverages past pose data to predict temporally coherent poses. Results demonstrate that DragPoser surpasses both IK-based and the latest data-driven methods in achieving precise end-effector positioning, while producing natural poses and temporally coherent motion. In addition, our system is robust to on-the-fly constraint modifications and adapts to varying input configurations.
The complete source code, trained model, animation databases, and supplementary material used in this paper can be found at https://upc-virvig.github.io/DragPoser
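The optimization loop the abstract describes — constraints defined as losses, and a latent code iteratively refined by following the gradients of those losses through a frozen decoder — can be sketched in miniature. This is not the paper's code: the trained decoder is replaced by a random linear map, the gradient is analytic rather than via autodiff, and all names and sizes are hypothetical.

```python
import numpy as np

# Toy sketch of latent-space pose optimization. A frozen "decoder" maps a
# latent code z to a full-body pose vector; an end-effector constraint is
# expressed as a loss on selected pose coordinates, and z is refined by
# gradient descent on that loss. The real system uses a learned network,
# autodiff, and a Temporal Predictor to stay on the valid-pose manifold.

rng = np.random.default_rng(0)
latent_dim, pose_dim = 8, 30                           # hypothetical sizes
W = rng.standard_normal((pose_dim, latent_dim)) * 0.3  # stand-in decoder

def decode(z):
    """Map a latent code to a flattened pose vector."""
    return W @ z

# Constraint as a loss: selected "end-effector" coordinates of the decoded
# pose must match the sparse tracker targets (both chosen arbitrarily here).
ee_idx = np.array([0, 1, 2, 15, 16, 17])
target = rng.standard_normal(ee_idx.size)

def ee_loss(z):
    return 0.5 * np.sum((decode(z)[ee_idx] - target) ** 2)

def ee_grad(z):
    # Analytic gradient of ee_loss w.r.t. z for the linear stand-in decoder.
    return W[ee_idx].T @ (decode(z)[ee_idx] - target)

# Iterative refinement in latent space; step size from the largest singular
# value of the constrained sub-decoder so the quadratic descent is stable.
lr = 1.0 / np.linalg.norm(W[ee_idx], 2) ** 2
z = np.zeros(latent_dim)
for _ in range(5000):
    z -= lr * ee_grad(z)
```

Because constraints enter only through the loss, they can be added, removed, or retargeted between iterations without retraining — the property the abstract calls on-the-fly constraint modification.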
dc.description.number: 2
dc.description.sectionheaders: Bringing Motion to Life: Motion Reconstruction and Control
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 44
dc.identifier.doi: 10.1111/cgf.70026
dc.identifier.issn: 1467-8659
dc.identifier.pages: 14 pages
dc.identifier.uri: https://doi.org/10.1111/cgf.70026
dc.identifier.uri: https://diglib.eg.org/handle/10.1111/cgf70026
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd.
dc.rights: Attribution-NonCommercial 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-nc/4.0/
dc.subject: CCS Concepts: Computing methodologies → Motion capture; Motion processing; Animation; Learning paradigms
dc.title: DragPoser: Motion Reconstruction from Variable Sparse Tracking Signals via Latent Space Optimization
Files
Original bundle (2 files):
- cgf70026.pdf (7.57 MB, Adobe Portable Document Format)
- paper1046_1.mp4 (40.86 MB, Video MP4)