Browsing by Author "Popa, Tiberiu"

Now showing 1 - 4 of 4

Face Editing Using Part-Based Optimization of the Latent Space
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Aliari, Mohammad Amin; Beauchamp, Andre; Popa, Tiberiu; Paquette, Eric; Myszkowski, Karol; Niessner, Matthias
    We propose an approach for interactive 3D face editing based on deep generative models. Most current face modeling methods rely on linear models and cannot express complex, non-linear deformations. In contrast to 3D morphable face models based on Principal Component Analysis (PCA), we introduce a novel architecture based on variational autoencoders. Our architecture has multiple encoders (one for each part of the face, such as the nose and mouth) which feed a single decoder, so each sub-vector of the latent vector represents one part. We train our model with a novel loss function that further disentangles the space based on the different parts of the face. The output of the network is a whole 3D face. Hence, unlike part-based PCA methods, our model learns to merge the parts intrinsically and does not require an additional merging process. To achieve interactive face modeling, we optimize for the latent variables given vertex positional constraints provided by a user. To avoid unwanted global changes elsewhere on the face, we optimize only the subset of the latent vector that corresponds to the part of the face being modified. Our editing optimization converges in less than a second. Our results show that the proposed approach supports a broader range of editing constraints and generates more realistic 3D faces.
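
A rough PyTorch sketch of the part-based editing idea described in the abstract above: several per-part encoders feed a single decoder, and an edit optimizes only the latent sub-vector of the part being modified, given vertex positional constraints. The part names, layer sizes, and use of plain linear layers are illustrative assumptions, not the paper's actual architecture or training losses.

```python
import torch
import torch.nn as nn

PARTS = ["nose", "mouth", "eyes", "rest"]   # assumed face partition
PART_DIM = 16                               # assumed per-part latent size
N_VERTS = 5023                              # assumed mesh resolution

class PartBasedVAE(nn.Module):
    def __init__(self):
        super().__init__()
        # One encoder per face part; each maps the full mesh to one latent sub-vector.
        self.encoders = nn.ModuleDict({
            p: nn.Linear(N_VERTS * 3, PART_DIM) for p in PARTS
        })
        # A single decoder consumes the concatenated latent and emits a whole face.
        self.decoder = nn.Linear(PART_DIM * len(PARTS), N_VERTS * 3)

    def encode(self, verts):                # verts: (B, N_VERTS * 3)
        return torch.cat([self.encoders[p](verts) for p in PARTS], dim=-1)

    def decode(self, z):                    # z: (B, PART_DIM * len(PARTS))
        return self.decoder(z).view(-1, N_VERTS, 3)

def edit_part(model, z, part, vert_idx, targets, steps=100, lr=0.05):
    """Optimize only `part`'s sub-vector so vertices `vert_idx` reach `targets`."""
    s = PARTS.index(part) * PART_DIM
    z = z.detach().clone()
    sub = z[:, s:s + PART_DIM].clone().requires_grad_(True)
    opt = torch.optim.Adam([sub], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        z_full = torch.cat([z[:, :s], sub, z[:, s + PART_DIM:]], dim=-1)
        # Positional constraint loss on the user-selected vertices only.
        loss = ((model.decode(z_full)[:, vert_idx] - targets) ** 2).mean()
        loss.backward()
        opt.step()
    return torch.cat([z[:, :s], sub.detach(), z[:, s + PART_DIM:]], dim=-1)
```

Because the gradient flows into only one sub-vector, the latent codes of the remaining parts are untouched by construction, which mirrors the strategy the abstract describes for avoiding unwanted global changes.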

Knowledge-based Discovery of Transportation Object Properties by Fusing Multi-modal GIS Data
    (The Eurographics Association, 2018) Maroun, Pedro Eid; Mudur, Sudhir; Popa, Tiberiu; Tam, Gary K. L.; Vidal, Franck
    3D models of transportation objects such as roads, bridges, and underpasses are required in many domains, including military training and land development. While remotely sensed images and LiDAR data can be used to create approximate 3D representations, detailed 3D representations are difficult to create automatically; instead, interactive tools are used with rather laborious effort. For example, the top commercial interactive model generator we tried required 94 parameters in all for the different bridge types. In this paper, we take a different path. We automatically derive these parameter values from GIS (Geographic Information Systems) data, which normally contains detailed information about these objects, but often only implicitly. The framework presented here transforms GIS data into a knowledge base consisting of assertions. Spatial and numeric relations are handled through plug-ins called property extractors, whose results are added to the knowledge base and used by a reasoning engine to infer object properties. A number of properties have to be extracted from images and therefore depend on the accuracy of computer vision methods. While a comprehensive property extractor mechanism is work in progress, a prototype implementation illustrates our framework for bridges with real-world GIS data. To the best of our knowledge, our framework is the first to integrate knowledge inference and uncertainty for extracting landscape object properties by fusing facts from multi-modal GIS data sources.
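
As a toy illustration of the assertion/plug-in flow sketched in the abstract above, the following Python snippet builds a small knowledge base, runs a property extractor that turns implicit geometry into an explicit fact, and applies a single inference rule. The predicate names, the classification rule, and the GIS record are invented for illustration; they do not reproduce the paper's ontology or reasoning engine.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Assertion:
    subject: str
    predicate: str
    value: object

class KnowledgeBase:
    def __init__(self):
        self.facts = set()

    def assert_fact(self, fact: Assertion):
        self.facts.add(fact)

    def query(self, subject: str, predicate: str):
        return [f.value for f in self.facts
                if f.subject == subject and f.predicate == predicate]

def span_extractor(kb: KnowledgeBase, gis_record: dict):
    """Property extractor plug-in: make an implicit span length explicit."""
    (x0, y0), (x1, y1) = gis_record["endpoints"]
    length = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    kb.assert_fact(Assertion(gis_record["id"], "span_m", round(length, 1)))

def infer_bridge_type(kb: KnowledgeBase, obj_id: str):
    """Toy reasoning rule: classify the bridge by its extracted span."""
    for span in kb.query(obj_id, "span_m"):
        kind = "girder" if span < 50 else "truss"
        kb.assert_fact(Assertion(obj_id, "bridge_type", kind))

kb = KnowledgeBase()
span_extractor(kb, {"id": "bridge_17", "endpoints": [(0.0, 0.0), (42.0, 9.0)]})
infer_bridge_type(kb, "bridge_17")
print(kb.query("bridge_17", "bridge_type"))   # -> ['girder']
```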

Personalized Visual Dubbing through Virtual Dubber and Full Head Reenactment
    (The Eurographics Association, 2025) Jeon, Bobae; Paquette, Eric; Mudur, Sudhir; Popa, Tiberiu; Ceylan, Duygu; Li, Tzu-Mao
    Visual dubbing aims to modify facial expressions to "lip-sync" a new audio track. While person-generic talking head generation methods achieve expressive lip synchronization across arbitrary identities, they usually lack person-specific details and fail to generate high-quality results. Conversely, person-specific methods require extensive training. Our method combines the strengths of both approaches by incorporating a virtual dubber, a person-generic talking head, as an intermediate representation. We then employ an autoencoder-based person-specific identity-swapping network to transfer the actor's identity, enabling full-head reenactment that includes hair, face, ears, and neck. This eliminates artifacts while ensuring temporal consistency. Our quantitative and qualitative evaluations demonstrate that our method achieves a superior balance between lip-sync accuracy and realistic facial reenactment.
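
A schematic Python sketch of the two-stage pipeline described in the abstract above: a person-generic "virtual dubber" produces lip-synced frames, and a person-specific identity-swapping autoencoder re-renders them as the actor with full-head coverage. Both classes are placeholders standing in for the learned networks; their interfaces are assumptions made only to show how the stages compose.

```python
import numpy as np

class VirtualDubber:
    """Person-generic talking head: audio features + frame -> lip-synced frame."""
    def render(self, audio_feat: np.ndarray, frame: np.ndarray) -> np.ndarray:
        return frame  # placeholder: a trained model would re-animate the mouth here

class IdentitySwapper:
    """Person-specific autoencoder: re-renders hair, face, ears, and neck as the actor."""
    def swap(self, dubber_frame: np.ndarray) -> np.ndarray:
        return dubber_frame  # placeholder for the actor-specific decoder

def dub_video(frames, audio_feats, dubber: VirtualDubber, swapper: IdentitySwapper):
    out = []
    for frame, feat in zip(frames, audio_feats):
        intermediate = dubber.render(feat, frame)  # expressive but generic lip-sync
        out.append(swapper.swap(intermediate))     # restore person-specific detail
    return out
```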

SCA 2020 CGF 39-8: Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Bender, Jan; Popa, Tiberiu
