Browsing by Author "Habermann, Marc"
Now showing 1 - 2 of 2
Item
HDHumans: A Hybrid Approach for High-fidelity Digital Humans (ACM Association for Computing Machinery, 2023)
Habermann, Marc; Liu, Lingjie; Xu, Weipeng; Pons-Moll, Gerard; Zollhoefer, Michael; Theobalt, Christian; Wang, Huamin; Ye, Yuting; Zordan, Victor
Photo-real digital human avatars are of enormous importance in graphics, as they enable immersive communication across the globe, improve gaming and entertainment experiences, and can be particularly beneficial for AR and VR settings. However, current avatar generation approaches either fall short in high-fidelity novel view synthesis, generalization to novel motions, or reproduction of loose clothing, or they cannot render characters at the high resolution offered by modern displays. To this end, we propose HDHumans, the first method for HD human character synthesis that jointly produces an accurate and temporally coherent 3D deforming surface and highly photo-realistic images of arbitrary novel views and of motions not seen at training time. At its technical core, our method tightly integrates a classical deforming character template with neural radiance fields (NeRF), and it is carefully designed to achieve a synergy between classical surface deformation and the NeRF. First, the template guides the NeRF, which allows synthesizing novel views of a highly dynamic and articulated character and even enables the synthesis of novel motions. Second, we leverage the dense point clouds resulting from the NeRF to further improve the deforming surface via 3D-to-3D supervision. We outperform the state of the art quantitatively and qualitatively in terms of synthesis quality and resolution, as well as the quality of 3D surface reconstruction. (See the illustrative sketch at the end of this listing.)

Item
State of the Art in Dense Monocular Non-Rigid 3D Reconstruction (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Tretschk, Edith; Kairanda, Navami; B R, Mallikarjun; Dabral, Rishabh; Kortylewski, Adam; Egger, Bernhard; Habermann, Marc; Fua, Pascal; Theobalt, Christian; Golyanik, Vladislav; Bousseau, Adrien; Theobalt, Christian
3D reconstruction of deformable (or non-rigid) scenes from a set of monocular 2D image observations is a long-standing and actively researched area of computer vision and graphics. It is an ill-posed inverse problem, since, without additional prior assumptions, it permits infinitely many solutions that project accurately onto the input 2D images. Non-rigid reconstruction is a foundational building block for downstream applications like robotics, AR/VR, and visual content creation. The key advantage of monocular cameras is their omnipresence and availability to end users, as well as their ease of use compared to more sophisticated camera set-ups such as stereo or multi-view systems. This survey focuses on state-of-the-art methods for dense non-rigid 3D reconstruction of various deformable objects and composite scenes from monocular videos or sets of monocular views. It reviews the fundamentals of 3D reconstruction and deformation modeling from 2D image observations. We then start from general methods, which handle arbitrary scenes and make only a few prior assumptions, and proceed towards techniques making stronger assumptions about the observed objects and types of deformations (e.g., human faces, bodies, hands, and animals). A significant part of this STAR is also devoted to the classification and high-level comparison of the methods, as well as an overview of the datasets for training and evaluation of the discussed techniques.
We conclude by discussing open challenges in the field and the social aspects associated with the use of the reviewed methods.
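The ill-posedness mentioned in the abstract above can be made concrete with the generic energy that many dense monocular non-rigid reconstruction methods minimize. The formulation below is an illustrative, commonly used template and is not taken from the survey itself:

E(\mathbf{S}_t) \;=\; \sum_{p} \bigl\| \pi\bigl(\mathbf{S}_t(p)\bigr) - \mathbf{x}_{t,p} \bigr\|_2^2 \;+\; \lambda\, E_{\mathrm{prior}}(\mathbf{S}_t)

Here \mathbf{S}_t(p) is the 3D position of surface point p at frame t, \pi is the known camera projection, and \mathbf{x}_{t,p} is the observed 2D location of that point in the image. Without the prior term E_{\mathrm{prior}} (e.g. as-rigid-as-possible deformation, low-rank shape bases, or a learned category-specific model), every surface whose points reproject onto the observations minimizes the data term, since depth along each camera ray is unconstrained; this is exactly the "infinitely many solutions" the abstract refers to, and the weight \lambda balances data fidelity against the prior.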
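The HDHumans abstract above names two couplings between the character template and the NeRF: the template guides where the radiance field is queried, and the dense point cloud extracted from the NeRF supervises the deforming surface in 3D. The following minimal NumPy sketch illustrates those two ideas only in spirit, under strong simplifying assumptions; canonicalize, toy_radiance_field, and chamfer are hypothetical placeholders and do not reflect the authors' actual implementation.

# Hedged sketch: (1) template-guided NeRF query, (2) 3D-to-3D supervision
# of the deforming surface from a NeRF-derived point cloud.
import numpy as np

def canonicalize(query_pts, posed_verts, canon_verts):
    """Map posed-space query points toward the template's canonical space
    by borrowing the offset of the nearest posed template vertex
    (a crude stand-in for the skinning/deformation a real system uses)."""
    d = np.linalg.norm(query_pts[:, None, :] - posed_verts[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    return query_pts - posed_verts[nearest] + canon_verts[nearest]

def toy_radiance_field(pts):
    """Stand-in for the NeRF MLP: returns (density, rgb) per point."""
    density = np.exp(-np.linalg.norm(pts, axis=-1))   # (N,)
    rgb = 0.5 * (np.tanh(pts) + 1.0)                  # (N, 3), values in [0, 1]
    return density, rgb

def chamfer(a, b):
    """Symmetric Chamfer distance, a typical 3D-to-3D supervision signal."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    canon_verts = rng.normal(size=(50, 3))                   # canonical template
    posed_verts = canon_verts + np.array([0.3, 0.0, 0.0])    # toy "deformation"

    # (1) Template guides the NeRF: sample near the posed surface,
    #     canonicalize the samples, then evaluate the radiance field.
    samples = posed_verts + 0.05 * rng.normal(size=posed_verts.shape)
    density, rgb = toy_radiance_field(canonicalize(samples, posed_verts, canon_verts))

    # (2) NeRF helps the surface: keep high-density samples as a dense point
    #     cloud and use it to constrain the deforming template in 3D.
    nerf_cloud = samples[density > density.mean()]
    print("Chamfer(template, NeRF cloud):", chamfer(posed_verts, nerf_cloud))
    print("Mean sampled RGB:", rgb.mean(axis=0))

The sketch collapses the learned deformation and the volumetric rendering into toy functions; its only purpose is to show how a template-to-canonical mapping can sit in front of a radiance-field query, and how a point cloud derived from that field can feed back into a surface loss.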