Eurographics Conferences
Browsing Eurographics Conferences by Subject "3D imaging"
Item: Auto-rigging 3D Bipedal Characters in Arbitrary Poses (The Eurographics Association, 2021)
Authors: Kim, Jeonghwan; Son, Hyeontae; Bae, Jinseok; Kim, Young Min
Editors: Theisel, Holger; Wimmer, Michael
We present an end-to-end algorithm that can automatically rig a given 3D character so that it is ready for 3D animation. The animation of a virtual character requires the skeletal motion defined with bones and joints, and the corresponding deformation of the mesh represented with skin weights. While the conventional animation pipeline requires the initial 3D character to be in a predefined default pose, our pipeline can rig a 3D character in an arbitrary pose. We handle the increased ambiguity by fixing the skeletal topology and solving for the full deformation space. After the skeletal positions and orientations are fully recovered, we can deform the provided 3D character into the default pose, from which we can animate the character with the help of recent motion-retargeting techniques. Our results show that we can successfully animate initially deformed characters, which was not possible with previous works.

Item: Exemplar Based Filtering of 2.5D Meshes of Faces (The Eurographics Association, 2018)
Authors: Dihl, Leandro; Cruz, Leandro; Gonçalves, Nuno
Editors: Jain, Eakta; Kosinka, Jirí
In this work, we present a content-aware filtering method for 2.5D meshes of faces. We propose an exemplar-based filter that corrects each point of a given mesh through local model-exemplar neighborhood comparison, taking advantage of prior knowledge of the models (faces) to improve the comparison. We first detect facial feature points, build point correctors for the region around each feature, and use only the corresponding regions when correcting a point of the filtered mesh.

Item: Fast and Fine Disparity Reconstruction for Wide-baseline Camera Arrays with Deep Neural Networks (The Eurographics Association, 2022)
Authors: Barrios, Théo; Gerhards, Julien; Prévost, Stéphanie; Loscos, Celine
Editors: Sauvage, Basile; Hasic-Telalovic, Jasminka
Recently, disparity-based 3D reconstruction for stereo camera pairs and light field cameras has been greatly improved by the rise of deep learning-based methods. However, only a few of these approaches address wide-baseline camera arrays, which require specific solutions. In this paper, we introduce a deep learning-based pipeline for multi-view disparity inference from images of a wide-baseline camera array. The network builds a low-resolution disparity map and recovers the original resolution with an additional upscaling step. Our solution handles wide-baseline array configurations and infers disparity for full HD images at interactive rates, while reducing quantization error compared to the state of the art.
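The abstract above mentions predicting a low-resolution disparity map followed by an upscaling step, but gives no implementation details. As a rough, self-contained illustration of the post-processing such a pipeline implies, the sketch below upsamples a low-resolution disparity map and converts it to metric depth for a rectified camera pair using the standard relation depth = focal * baseline / disparity. The function name upsample_and_to_depth and its parameters (scale, focal_px, baseline_m) are illustrative assumptions, not part of the authors' pipeline.

```python
import numpy as np

def upsample_and_to_depth(disp_lr, scale, focal_px, baseline_m):
    """Hypothetical post-processing of a low-resolution disparity prediction.

    disp_lr    : (h, w) disparity map, in low-resolution pixel units
    scale      : integer factor back to the original image resolution
    focal_px   : focal length in pixels (assumed equal for both cameras)
    baseline_m : distance between the two camera centres, in metres
    """
    # Nearest-neighbour upscaling stands in for the learned upscaling step;
    # disparities are rescaled so they are expressed in full-resolution pixels.
    disp_hr = np.kron(disp_lr, np.ones((scale, scale))) * scale
    # Classic rectified-stereo relation: depth = f * B / d, valid where d > 0.
    depth = np.full_like(disp_hr, np.inf)
    valid = disp_hr > 0
    depth[valid] = focal_px * baseline_m / disp_hr[valid]
    return depth
```

For an actual camera array, this per-pair relation would be applied with respect to a chosen reference view or repeated over rectified pairs.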
Item: From Capture to Immersive Viewing of 3D HDR Point Clouds (The Eurographics Association, 2022)
Authors: Loscos, Celine; Souchet, Philippe; Barrios, Théo; Valenzise, Giuseppe; Cozot, Rémi
Editors: Hahmann, Stefanie; Patow, Gustavo A.
The collaborators of the ReVeRY project address the design of a specific grid of cameras, a cost-efficient system that acquires several viewpoints at once, possibly under several exposures, and the conversion of the multi-view, multi-exposure video stream into a high-quality 3D HDR point cloud. In the last two decades, industry and researchers have proposed significant advances in media content acquisition systems in three main directions: increased resolution and image quality with the new ultra-high-definition (UHD) standard; stereo capture for 3D content; and high-dynamic-range (HDR) imaging. Compression, representation, and interoperability of these new media are active research fields aiming to reduce data size while remaining perceptually accurate. The originality of the project is to address both HDR and depth throughout the entire pipeline. Creativity is enhanced by several tools that answer challenges at the different stages of the pipeline: camera setup, data processing, capture visualisation, virtual camera controller, compression, and perceptually guided immersive visualisation. This tutorial presents the experience acquired by the researchers of the project.

Item: Improved Lighting Models for Facial Appearance Capture (The Eurographics Association, 2022)
Authors: Xu, Yingyan; Riviere, Jérémy; Zoss, Gaspard; Chandran, Prashanth; Bradley, Derek; Gotardo, Paulo
Editors: Pelechano, Nuria; Vanderhaeghe, David
Facial appearance capture techniques estimate the geometry and reflectance properties of facial skin by performing a computationally intensive inverse-rendering optimization in which one or more images are re-rendered a large number of times and compared to real images coming from multiple cameras. Due to the high computational burden, these techniques often make several simplifying assumptions to tame complexity and make the problem more tractable. For example, it is common to assume that the scene consists only of distant light sources, and to ignore indirect bounces of light (on the surface and within the surface). Also, methods based on polarized lighting often simplify the light interaction with the surface and assume perfect separation of diffuse and specular reflectance. In this paper, we move in the opposite direction and demonstrate the impact on facial appearance capture quality when departing from these idealized conditions towards models that seek to more accurately represent the lighting, while at the same time minimally increasing the computational burden. We compare the results obtained with a state-of-the-art appearance capture method [RGB*20], with and without our proposed improvements to the lighting model.
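For reference, the idealized conditions this abstract lists (distant light sources, no indirect bounces, simplified reflectance) correspond to a very small re-render-and-compare objective. The minimal sketch below shows that objective under exactly those simplifications with Lambertian-only shading; it is not the method of the paper or of [RGB*20], and all names (photometric_loss, light_dirs, light_rgb) are hypothetical.

```python
import numpy as np

def photometric_loss(albedo, normals, light_dirs, light_rgb, captured):
    """Re-render-and-compare objective under idealised assumptions:
    distant point lights, Lambertian reflectance only, no indirect bounces.

    albedo     : (n, 3) per-vertex diffuse albedo
    normals    : (n, 3) unit surface normals
    light_dirs : (m, 3) unit directions towards each distant light
    light_rgb  : (m, 3) light intensities
    captured   : (n, 3) observed radiance at the same surface points
    """
    # Lambertian shading: sum over lights of albedo * max(n . l, 0) * intensity.
    cosines = np.clip(normals @ light_dirs.T, 0.0, None)   # (n, m)
    rendered = albedo * (cosines @ light_rgb)               # (n, 3)
    # L2 residual that an optimiser would minimise with respect to albedo,
    # normals and, in richer models, the lighting parameters themselves.
    return np.mean((rendered - captured) ** 2)
```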
Item: State of the Art on Monocular 3D Face Reconstruction, Tracking, and Applications (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Zollhöfer, Michael; Thies, Justus; Garrido, Pablo; Bradley, Derek; Beeler, Thabo; Pérez, Patrick; Stamminger, Marc; Nießner, Matthias; Theobalt, Christian
Editors: Hildebrandt, Klaus; Theobalt, Christian
The computer graphics and vision communities have dedicated long-standing efforts to building computerized tools for reconstructing, tracking, and analyzing human faces based on visual input. Over the past years, rapid progress has been made, leading to novel and powerful algorithms that obtain impressive results even in the very challenging case of reconstruction from a single RGB or RGB-D camera. The range of applications is vast and steadily growing as these technologies improve in speed, accuracy, and ease of use. Motivated by this rapid progress, this state-of-the-art report summarizes recent trends in monocular facial performance capture and discusses its applications, which range from performance-based animation to real-time facial reenactment. We focus our discussion on methods where the central task is to recover and track a three-dimensional model of the human face using optimization-based reconstruction algorithms. We provide an in-depth overview of the underlying concepts of real-world image formation, and we discuss common assumptions and simplifications that make these algorithms practical. In addition, we extensively cover the priors that are used to better constrain the under-constrained monocular reconstruction problem, and discuss the optimization techniques that are employed to recover dense, photo-geometric 3D face models from monocular 2D data. Finally, we discuss a variety of use cases for the reviewed algorithms in the context of motion capture, facial animation, and image and video editing.
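The report surveys optimization-based, analysis-by-synthesis fitting with statistical priors. As a rough illustration of the kind of objective such methods minimize, the sketch below combines a linear face prior with photometric, landmark, and regularization terms; the basis names, weights, and function signatures are illustrative assumptions, not taken from the report.

```python
import numpy as np

def morphable_face(alpha, delta, mean_shape, id_basis, expr_basis):
    """Linear face prior: vertices as mean shape plus identity and expression offsets.

    alpha      : (k_id,)     identity coefficients
    delta      : (k_exp,)    expression coefficients
    mean_shape : (3n,)       stacked xyz coordinates of the average face
    id_basis   : (3n, k_id)  identity basis (e.g. PCA directions)
    expr_basis : (3n, k_exp) expression basis
    """
    return (mean_shape + id_basis @ alpha + expr_basis @ delta).reshape(-1, 3)

def fitting_energy(rendered, observed, proj_landmarks, detected_landmarks,
                   alpha, delta, w_photo=1.0, w_land=1e-3, w_reg=1e-4):
    """Weighted sum of the terms typically combined in monocular face fitting:
    a dense photometric term, a sparse 2D landmark term, and a statistical
    prior keeping the coefficients close to the model mean."""
    e_photo = np.mean((rendered - observed) ** 2)
    e_land = np.mean((proj_landmarks - detected_landmarks) ** 2)
    e_reg = np.sum(alpha ** 2) + np.sum(delta ** 2)
    return w_photo * e_photo + w_land * e_land + w_reg * e_reg
```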