Browsing by Author "Zell, Eduard"
Now showing 1 - 6 of 6
Item: Compact Facial Landmark Layouts for Performance Capture (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Zell, Eduard; McDonnell, Rachel; Chaine, Raphaëlle; Kim, Min H.

An abundance of older as well as recent work exists at the intersection of computer vision and computer graphics on the accurate estimation of dynamic facial landmarks, with applications in facial animation, emotion recognition, and beyond. However, only a few publications optimize the actual layout of facial landmarks to ensure a good trade-off between compact layouts and detailed capture. At the same time, we observe that applications like social games prefer simplicity and performance over detail to reduce the computational budget, especially on mobile devices. Other common attributes of such applications are predefined low-dimensional models to animate and a large, diverse user base. In contrast to existing methods that focus on creating person-specific facial landmarks, we suggest deriving application-specific facial landmarks. We formulate our optimization method on the widely adopted blendshape model. First, a score suitable for computing a characteristic landmark for each blendshape is defined. In a subsequent step, we optimize a global function that mimics the merging of similar landmarks into one. The optimization is solved in less than a second using integer linear programming and guarantees a globally optimal solution to an NP-hard problem. Our application-specific approach is faster than, and fundamentally different from, previous actor-specific methods. The resulting layouts are more similar to empirical layouts, and compared to empirical landmarks they require only a fraction of the landmarks to achieve the same numerical error when reconstructing the animation from landmarks. The method is compared against previous work and tested on various blendshape models representing a wide spectrum of applications.
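The merging step described in this abstract can be illustrated as a set-cover style integer linear program. The sketch below is a generic formulation, not the paper's actual objective; the candidate array `candidates`, the similarity matrix `S`, and the threshold `tau` are illustrative assumptions.

```python
# Generic set-cover sketch (NOT the paper's exact formulation): keep as few
# candidate landmarks as possible while every blendshape's characteristic
# landmark is still represented by a sufficiently similar selected one.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def compact_layout(candidates, S, tau):
    """candidates : (n, 3) characteristic landmark per blendshape (assumed)
    S          : (n, n) pairwise landmark similarity (assumed)
    tau        : similarity threshold for merging (assumed)"""
    n = len(candidates)
    cover = (S >= tau).astype(float)  # cover[i, j] = 1 if j can stand in for i
    res = milp(
        c=np.ones(n),                                        # minimise number of selected landmarks
        constraints=LinearConstraint(cover, lb=np.ones(n)),  # every blendshape covered at least once
        integrality=np.ones(n),                              # x_j integer...
        bounds=Bounds(0, 1),                                 # ...and binary
    )
    keep = res.x.round().astype(bool)
    return candidates[keep]
```

Because each landmark is maximally similar to itself, the diagonal of `cover` is one and the program is always feasible; an off-the-shelf MILP solver then returns a provably minimal selection, mirroring the global-optimality claim in the abstract.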
Item: Expression Packing: As-Few-As-Possible Training Expressions for Blendshape Transfer (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Carrigan, Emma; Zell, Eduard; Guiard, Cedric; McDonnell, Rachel; Panozzo, Daniele and Assarsson, Ulf

To simplify and accelerate the creation of blendshape rigs, using a template rig is a common procedure, especially during the creation of digital doubles. Blendshape transfer methods provide copy-and-paste functionality for moving the blendshapes from the template model to the digital double. However, for adequate personalization, such methods require a set of scanned training expressions of the original actor. So far, the semantics of the facial expressions to scan have been defined manually. In contrast, we formulate the semantics of the facial expressions as an integer optimization of the blendshape weights. By combining different blendshapes of the template model, our method creates facial expressions that serve as semantic references during scanning. Our method is guaranteed to compute as few training expressions as possible, with minimal overlap of activated blendshapes. If the number of training expressions is limited, blendshapes are selected based on their power to personalize the resulting blendshapes compared to generic blendshape transfer methods.
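To convey the packing idea, here is a deliberately simplified stand-in: the paper solves an integer optimization with guarantees, whereas this toy version greedily groups blendshapes first-fit, treating two blendshapes as conflicting when they displace a shared vertex. The `deltas` array and the `eps` threshold are assumptions.

```python
# Toy first-fit grouping of non-overlapping blendshapes into training
# expressions; a simplified stand-in for the paper's integer optimization.
import numpy as np

def pack_expressions(deltas, eps=1e-4):
    """deltas : (k, n, 3) per-vertex offsets of k blendshapes (assumed)."""
    active = np.linalg.norm(deltas, axis=2) > eps    # (k, n) activation masks
    conflict = active.astype(int) @ active.astype(int).T > 0
    np.fill_diagonal(conflict, False)                # a shape never conflicts with itself

    expressions = []                                 # lists of blendshape indices
    for shape in range(deltas.shape[0]):
        for group in expressions:
            if not conflict[shape, group].any():     # no overlap with this group
                group.append(shape)
                break
        else:
            expressions.append([shape])              # open a new training expression
    return expressions
```

Each returned group activates mutually disjoint blendshapes and can therefore be scanned as one combined facial expression; the greedy pass is compact but, unlike the paper's method, offers no minimality guarantee.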
Item: From Perception to Interaction with Virtual Characters (The Eurographics Association, 2020)
Zell, Eduard; Zibrek, Katja; Pan, Xueni; Gillies, Marco; McDonnell, Rachel; Fjeld, Morten and Frisvad, Jeppe Revall

This course will introduce students, researchers, and digital artists to recent results in perceptual research on virtual characters. It covers how the technical and artistic aspects that constitute the appearance of a virtual character influence human perception, and how to create a plausibility illusion in interactive scenarios with virtual characters. We will report results of studies that addressed the influence of low-level cues like facial proportions, shading, or level of detail, and of higher-level cues such as behavior or artistic stylization. We will place emphasis on aspects encountered during character development, animation, and interaction design, and on achieving consistency between the visuals and the storytelling. We will close with the relationship between verbal and non-verbal interaction and introduce some concepts that are important for creating convincing character behavior in virtual reality. The insights presented in this course will serve as an additional toolset for anticipating the effect of certain design decisions and for creating more convincing characters, especially when budgets or time are limited.

Item: The Secret of Appeal - Understanding Perception of Realistic and Stylized Faces (Verlag Dr. Hut, 2018-07-16)
Zell, Eduard

Stylized characters are widely used in movies and games. Furthermore, stylization is mostly preferred over realism for the design of toys and social robots. However, the design process remains highly subjective because the influence of possible design choices on character perception is not well understood. Investigating the high-dimensional space of character stylization by means of perception experiments is difficult, because creating and animating compelling characters at different stylization levels remains a challenging task. In this context, computer graphics algorithms enable the creation of highly controllable stimuli, simplifying the examination of specific features that can strongly influence the overall perception of a character. This thesis is separated into two parts. First, a pipeline is presented for creating virtual doubles of real people. In addition, algorithms are described that are suitable for transferring surface properties and animation between faces of different stylization levels. With ElastiFace, a simple and versatile method is introduced for establishing dense correspondences between textured face models; the method extends non-rigid registration techniques to allow for strongly varying input geometries. The technical part closes with an algorithm that addresses the problem of animation transfer between faces. Such facial retargeting frameworks consist of a pre-processing step in which blendshapes are transferred from one face to another. By exploiting the similarities between an expressive training sequence of an actor and the blendshapes of the facial rig to be animated, the accuracy of transferring the blendshapes to the actor's proportions is greatly improved. Consequently, this step enhances the overall reliability and quality of facial retargeting.

The second part covers two perception studies with stimuli created using the previously described pipeline and algorithms. The results of both studies improve the understanding of the crucial factors for creating appealing characters across different stylization levels. The first study analyzes the most influential factors that define a character's appearance, using rating scales in four perceptual experiments. In particular, it focuses on shape and material, but also considers shading, lighting, and albedo. The study reveals that shape is the dominant factor when rating expression intensity and realism, while material is crucial for appeal. Furthermore, the results show that realism alone is a bad predictor for appeal, eeriness, or attractiveness. The second study investigates how various degrees of stylization are processed by the brain, using event-related potentials (ERPs). Specifically, it focuses on the N170, early posterior negativity (EPN), and late positive potential (LPP) components. The face-specific N170 shows a U-shaped modulation, with stronger reactions towards both the most abstract and the most realistic faces compared to medium-stylized faces. In addition, the LPP increases linearly with face realism, reflecting increased activity in the visual and parietal cortex for more realistic faces. The results reveal differential effects of face stylization on distinct face-processing stages and suggest a perceptual basis for the uncanny valley hypothesis.

Item: ShadowPatch: Shadow Based Segmentation for Reliable Depth Discontinuities in Photometric Stereo (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Heep, Moritz; Zell, Eduard; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne

Photometric stereo is a well-established method with an outstanding ability to recover surface details and material properties such as surface albedo or even specularity. However, while the surface is locally well defined, computing absolute depth by integrating surface normals is notoriously difficult. Integration errors can be introduced and propagated by numerical inaccuracies arising from inter-reflection of light or from non-Lambertian surfaces; ignoring depth discontinuities of overlapping or disconnected objects in particular introduces strong distortion artefacts. During acquisition the object is lit from different positions, and self-shadowing is generally considered an unavoidable drawback that complicates the numerical estimation of normals. However, we observe that shadow boundaries correlate strongly with depth discontinuities, and we exploit the visual structure introduced by self-shadowing to create a consistent image segmentation of continuous surfaces. To make depth estimation more robust, we deeply integrate photometric stereo with depth-from-stereo. The shadow-based segmentation of continuous surfaces allows us to reduce the computational cost of the correspondence search in depth-from-stereo, and to speed up computation further we merge segments into larger meta-segments during an iterative depth optimization. The reconstruction error of our method is equal to or smaller than that of previous work, and the reconstruction results are characterized by robust handling of depth discontinuities, without any smearing artefacts.
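For background, the classical Lambertian normal-estimation step on which photometric stereo rests fits in a few lines. This is textbook material, not the ShadowPatch pipeline, and the array shapes are assumptions.

```python
# Classic least-squares photometric stereo under m distant lights.
# Self-shadowed pixels violate this model; ShadowPatch instead turns
# shadow boundaries into segmentation cues for depth discontinuities.
import numpy as np

def photometric_stereo_normals(I, L):
    """I : (m, h, w) images, L : (m, 3) unit light directions (assumed)."""
    m, h, w = I.shape
    b = np.linalg.lstsq(L, I.reshape(m, -1), rcond=None)[0]  # solve L @ b = I per pixel
    albedo = np.linalg.norm(b, axis=0)                       # |b| equals the albedo
    normals = b / np.maximum(albedo, 1e-8)                   # b / |b| is the unit normal
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

The least-squares solve recovers normals only up to the per-pixel scale absorbed by the albedo, and it says nothing about absolute depth, which is exactly the integration problem the abstract addresses.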
Item: Volumetric Video - Acquisition, Compression, Interaction and Perception (The Eurographics Association, 2021)
Zell, Eduard; Castan, Fabien; Gasparini, Simone; Hilsmann, Anna; Kazhdan, Misha; Tagliasacchi, Andrea; Zarpalas, Dimitris; Zioulis, Nick; O'Sullivan, Carol and Schmalstieg, Dieter

Volumetric video, free-viewpoint video, and 4D reconstruction all refer to the process of reconstructing 3D content over time using a multi-view setup. The approach is steadily gaining popularity in both research and industry; in fact, volumetric video is increasingly considered as a way to acquire dynamic photorealistic content instead of relying on traditional 3D content-creation pipelines. The aim of the tutorial is to provide an overview of the entire volumetric video pipeline and to present existing projects that may serve as a starting point for this topic at the intersection of computer vision and graphics. The first part of the tutorial will focus on the process of computing 3D models from captured videos; topics include content acquisition with affordable hardware, photogrammetry, and surface reconstruction from point clouds. A notable contribution of the presenters to the graphics community is that they not only provide an overview of their topics but have also open-sourced their implementations. The second part will focus on the usage and distribution of volumetric video, including data compression, streaming, and post-processing such as pose modification or seamless blending. The tutorial will conclude with an overview of perceptual studies on quality assessment of 3D and 4D content.