
Browsing by Author "Fischer, Klaus"

Now showing 1 - 2 of 2
    Fine-Grained Semantic Segmentation of Motion Capture Data using Dilated Temporal Fully-Convolutional Networks
(The Eurographics Association, 2019) Cheema, Noshaba; Hosseini, Somayeh; Sprenger, Janis; Herrmann, Erik; Du, Han; Fischer, Klaus; Slusallek, Philipp; Cignoni, Paolo and Miguel, Eder
Human motion capture data is widely used in data-driven character animation. To generate realistic, natural-looking motions, most data-driven approaches require considerable pre-processing effort, including motion segmentation and annotation. Existing (semi-)automatic solutions either require hand-crafted features for motion segmentation or do not produce the semantic annotations required for motion synthesis and for building large-scale motion databases. In addition, human-labeled annotation data suffers by design from inter- and intra-labeler inconsistencies. We propose a semi-automatic framework for semantic segmentation of motion capture data based on supervised machine learning. It first transforms a motion capture sequence into a "motion image" and then applies a convolutional neural network for image segmentation. Dilated temporal convolutions enable the extraction of temporal information from a large receptive field. Our model outperforms two state-of-the-art models for action segmentation as well as a popular network for sequence modeling. Above all, our method is robust to noisy and inaccurate training labels and can therefore handle human errors during the labeling process.
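The sketch below is a minimal PyTorch illustration of the idea the abstract describes, not the authors' implementation: the motion capture sequence is treated as a 1-D "motion image" whose channels are per-frame pose features, and stacked dilated temporal convolutions grow the receptive field exponentially while keeping one class prediction per frame. All sizes here (channel count, dilation rates, number of classes) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DilatedTemporalFCN(nn.Module):
    """Per-frame segmentation of a motion-capture clip laid out as
    (pose-feature channels x frames). Dilated 1-D convolutions enlarge
    the temporal receptive field without pooling, so the output keeps
    exactly one label per input frame."""
    def __init__(self, in_channels=63, num_classes=10, hidden=64,
                 dilations=(1, 2, 4, 8, 16)):
        super().__init__()
        layers, c = [], in_channels
        for d in dilations:
            # padding=dilation keeps the frame count unchanged (kernel 3)
            layers += [nn.Conv1d(c, hidden, kernel_size=3,
                                 dilation=d, padding=d),
                       nn.ReLU()]
            c = hidden
        self.backbone = nn.Sequential(*layers)
        self.head = nn.Conv1d(hidden, num_classes, kernel_size=1)

    def forward(self, x):
        # x: (batch, in_channels, frames) -> (batch, num_classes, frames)
        return self.head(self.backbone(x))

# Example: 63 pose-feature channels, a 240-frame clip
model = DilatedTemporalFCN(in_channels=63, num_classes=10)
clip = torch.randn(2, 63, 240)
logits = model(clip)           # (2, 10, 240): class scores per frame
labels = logits.argmax(dim=1)  # per-frame semantic segmentation
```

With kernel size 3 and dilations 1, 2, 4, 8, 16, the receptive field spans roughly 63 frames, which is how such a fully-convolutional model can exploit long-range temporal context without losing per-frame resolution.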
    Stylistic Locomotion Modeling with Conditional Variational Autoencoder
(The Eurographics Association, 2019) Du, Han; Herrmann, Erik; Sprenger, Janis; Cheema, Noshaba; Hosseini, Somayeh; Fischer, Klaus; Slusallek, Philipp; Cignoni, Paolo and Miguel, Eder
We propose a novel approach to creating generative models for distinctive stylistic locomotion synthesis. The approach is inspired by the observation that human styles can easily be distinguished from just a few examples. However, learning a generative model for natural human motions, which exhibit large variation and randomness, would require a large amount of training data, and creating such a large motion database for each style would take considerable effort. We therefore propose a generative model that combines the large variation of a neutral motion database with style information from a limited number of examples. We formulate stylistic motion modeling as a conditional distribution learning problem, with style transfer applied implicitly during training. A conditional variational autoencoder (CVAE) learns the distribution, and the stylistic examples serve as constraints. We demonstrate that, given a few style examples and a neutral motion database, our approach can generate any number of natural-looking human motions in a style similar to the target.
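Below is a minimal PyTorch sketch of a conditional variational autoencoder of the general kind the abstract describes: encoder and decoder are both conditioned on a style label, so new motions in a chosen style can be sampled from the prior. The feature dimensions, one-hot style conditioning, and network sizes are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    """CVAE over motion feature vectors. Encoder and decoder are both
    conditioned on a style label, so sampling z ~ N(0, I) and decoding
    with a chosen style yields new motions in that style."""
    def __init__(self, motion_dim=120, num_styles=4, latent_dim=16, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Linear(motion_dim + num_styles, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + num_styles, hidden), nn.ReLU(),
            nn.Linear(hidden, motion_dim))

    def forward(self, x, style):
        h = self.enc(torch.cat([x, style], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        recon = self.dec(torch.cat([z, style], dim=-1))
        return recon, mu, logvar

def cvae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior
    rec = F.mse_loss(recon, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

# Sampling: generate a new motion in style 2 directly from the prior
model = ConditionalVAE()
style = F.one_hot(torch.tensor([2]), num_classes=4).float()
z = torch.randn(1, 16)
new_motion = model.dec(torch.cat([z, style], dim=-1))  # (1, 120)
```

Conditioning both networks on the style label is what lets a small number of stylistic examples act as constraints: the latent space captures the variation of the neutral database, while the condition steers decoding toward the target style.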
