43-Issue 2
Browsing 43-Issue 2 by Issue Date
Now showing 1 - 20 of 54
Item Computational Smocking through Fabric-Thread Interaction (The Eurographics Association and John Wiley & Sons Ltd., 2024) Zhou, Ningfeng; Ren, Jing; Sorkine-Hornung, Olga; Bermano, Amit H.; Kalogerakis, Evangelos. We formalize Italian smocking, an intricate embroidery technique that gathers flat fabric into pleats along meandering lines of stitches, resulting in pleats that fold and gather where the stitching veers. In contrast to English smocking, characterized by colorful stitches decorating uniformly shaped pleats, and Canadian smocking, which uses localized knots to form voluminous pleats, Italian smocking permits the fabric to move freely along the stitched threads following curved paths, resulting in complex and unpredictable pleats with highly diverse, irregular structures, achieved simply by pulling on the threads. We introduce a novel method for digital previewing of Italian smocking results, given the thread stitching path as input. Our method uses a coarse-grained mass-spring system to simulate the interaction between the threads and the fabric. This configuration guides the fine-level fabric deformation through an adaptation of the state-of-the-art simulator, C-IPC [LKJ21]. Our method models the general problem of fabric-thread interaction and can be readily adapted to preview Canadian smocking as well. We compare our results to baseline approaches and physical fabrications to demonstrate the accuracy of our method.

Item Cinematographic Camera Diffusion Model (The Eurographics Association and John Wiley & Sons Ltd., 2024) Jiang, Hongda; Wang, Xi; Christie, Marc; Liu, Libin; Chen, Baoquan; Bermano, Amit H.; Kalogerakis, Evangelos. Designing effective camera trajectories in virtual 3D environments is a challenging task even for experienced animators. Despite an elaborate film grammar, forged through years of experience, that enables the specification of camera motions through cinematographic properties (framing, shot sizes, angles, motions), there are endless possibilities in deciding how to place and move cameras with characters. Dealing with these possibilities is part of the complexity of the problem. While numerous techniques have been proposed in the literature (optimization-based solving, encoding of empirical rules, learning from real examples, ...), the results either lack variety or ease of control. In this paper, we propose a cinematographic camera diffusion model using a transformer-based architecture to handle temporality and exploit the stochasticity of diffusion models to generate diverse and qualitative trajectories conditioned by high-level textual descriptions. We extend the work by integrating keyframing constraints and the ability to blend naturally between motions using latent interpolation, in a way to augment the degree of control of the designers. We demonstrate the strengths of this text-to-camera motion approach through qualitative and quantitative experiments and gather feedback from professional artists. The code and data are available at https://github.com/jianghd1996/Camera-control.

Item Single-Image SVBRDF Estimation with Learned Gradient Descent (The Eurographics Association and John Wiley & Sons Ltd., 2024) Luo, Xuejiao; Scandolo, Leonardo; Bousseau, Adrien; Eisemann, Elmar; Bermano, Amit H.; Kalogerakis, Evangelos. Recovering spatially-varying materials from a single photograph of a surface is inherently ill-posed, making the direct application of gradient descent on the reflectance parameters prone to poor minima. Recent methods leverage deep learning either by directly regressing reflectance parameters using feed-forward neural networks or by learning a latent space of SVBRDFs using encoder-decoder or generative adversarial networks followed by a gradient-based optimization in latent space. The former is fast but does not account for the likelihood of the prediction, i.e., how well the resulting reflectance explains the input image. The latter provides a strong prior on the space of spatially-varying materials, but this prior can hinder the reconstruction of images that are too different from the training data. Our method combines the strengths of both approaches. We optimize reflectance parameters to best reconstruct the input image using a recurrent neural network, which iteratively predicts how to update the reflectance parameters given the gradient of the reconstruction likelihood. By combining a learned prior with a likelihood measure, our approach provides a maximum a posteriori estimate of the SVBRDF. Our evaluation shows that this learned gradient-descent method achieves state-of-the-art performance for SVBRDF estimation on synthetic and real images.
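The learned gradient-descent idea in the SVBRDF entry above can be illustrated with a minimal sketch: a small network (here a hypothetical MLP named update_net) receives the current parameters and the gradient of the reconstruction loss and predicts the next parameter update. The renderer below is a toy stand-in, not the paper's differentiable rendering model, and the update network is left untrained.

# Minimal learned-gradient-descent sketch (hypothetical names, toy renderer).
import torch
import torch.nn as nn

n_params = 16                      # stand-in for per-pixel reflectance parameters
update_net = nn.Sequential(        # hypothetical learned update rule (untrained here)
    nn.Linear(2 * n_params, 64), nn.ReLU(), nn.Linear(64, n_params))

def render(params):                # toy stand-in for a differentiable renderer
    return torch.tanh(params)

target = torch.rand(n_params)      # "input photograph"
params = torch.zeros(n_params, requires_grad=True)

for step in range(8):
    loss = ((render(params) - target) ** 2).mean()       # reconstruction likelihood term
    grad, = torch.autograd.grad(loss, params)
    with torch.no_grad():
        delta = update_net(torch.cat([params, grad]))     # network maps gradient -> update
        params = (params + 0.1 * delta).requires_grad_(True)

In the actual approach the update network would be trained so that this iteration converges to a maximum a posteriori estimate; the sketch only shows the control flow.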
Item Neural Denoising for Deep-Z Monte Carlo Renderings (The Eurographics Association and John Wiley & Sons Ltd., 2024) Zhang, Xianyao; Röthlin, Gerhard; Zhu, Shilin; Aydin, Tunç Ozan; Salehi, Farnood; Gross, Markus; Papas, Marios; Bermano, Amit H.; Kalogerakis, Evangelos. We present a kernel-predicting neural denoising method for path-traced deep-Z images that facilitates their usage in animation and visual effects production. Deep-Z images provide enhanced flexibility during compositing as they contain color, opacity, and other rendered data at multiple depth-resolved bins within each pixel. However, they are subject to noise, and rendering until convergence is prohibitively expensive. The current state of the art in deep-Z denoising yields objectionable artifacts, and current neural denoising methods are incapable of handling the variable number of depth bins in deep-Z images. Our method extends kernel-predicting convolutional neural networks to address the challenges stemming from denoising deep-Z images. We propose a hybrid reconstruction architecture that combines the depth-resolved reconstruction at each bin with the flattened reconstruction at the pixel level. Moreover, we propose depth-aware neighbor indexing of the depth-resolved inputs to the convolution and denoising kernel application operators, which reduces artifacts caused by depth misalignment present in deep-Z images. We evaluate our method on a production-quality deep-Z dataset, demonstrating significant improvements in denoising quality and performance compared to the current state-of-the-art deep-Z denoiser. By addressing the significant challenge of the cost associated with rendering path-traced deep-Z images, we believe that our approach will pave the way for broader adoption of deep-Z workflows in future productions.

Item Real-time Neural Rendering of Dynamic Light Fields (The Eurographics Association and John Wiley & Sons Ltd., 2024) Coomans, Arno; Dominici, Edoardo Alberto; Döring, Christian; Mueller, Joerg H.; Hladky, Jozef; Steinberger, Markus; Bermano, Amit H.; Kalogerakis, Evangelos. Synthesising high-quality views of dynamic scenes via path tracing is prohibitively expensive. Although caching offline-quality global illumination in neural networks alleviates this issue, existing neural view synthesis methods are limited to mainly static scenes, have low inference performance, or do not integrate well with existing rendering paradigms. We propose a novel neural method that is able to capture a dynamic light field, renders at real-time frame rates at 1920x1080 resolution, and integrates seamlessly with Monte Carlo ray tracing frameworks. We demonstrate how a combination of spatial, temporal, and a novel surface-space encoding are each effective at capturing different kinds of spatio-temporal signals. Together with a compact fully-fused neural network and architectural improvements, we achieve a twenty-fold increase in network inference speed compared to related methods at equal or better quality. Our approach is suitable for providing offline-quality real-time rendering in a variety of scenarios, such as free-viewpoint video, interactive multi-view rendering, or streaming rendering. Finally, our work can be integrated into other rendering paradigms, e.g., providing a dynamic background for interactive scenarios where the foreground is rendered with traditional methods.
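The kernel-predicting reconstruction behind the deep-Z denoising entry above can be summarised as follows: for each pixel the network predicts a small neighbourhood kernel, the weights are normalised, and the denoised value is a weighted sum of noisy neighbours. A minimal 2D sketch, assuming the per-pixel kernel logits are already given (here random stand-ins for a network output):

# Minimal kernel-predicting reconstruction sketch (kernels assumed already predicted).
import numpy as np

H, W, K = 32, 32, 5                      # image size and kernel width
noisy = np.random.rand(H, W, 3)          # noisy radiance
logits = np.random.rand(H, W, K, K)      # per-pixel kernel logits (stand-in for network output)

kernels = np.exp(logits)
kernels /= kernels.sum(axis=(2, 3), keepdims=True)       # softmax-normalised weights

pad = K // 2
padded = np.pad(noisy, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
denoised = np.zeros_like(noisy)
for dy in range(K):
    for dx in range(K):
        neighbour = padded[dy:dy + H, dx:dx + W]              # shifted copy of the image
        denoised += kernels[:, :, dy, dx, None] * neighbour   # weighted accumulation

The paper's contribution is applying this idea per depth-resolved bin with depth-aware neighbour indexing; the sketch only shows the flat, per-pixel case.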
Item Wavelet Potentials: An Efficient Potential Recovery Technique for Pointwise Incompressible Fluids (The Eurographics Association and John Wiley & Sons Ltd., 2024) Lyu, Luan; Ren, Xiaohua; Cao, Wei; Zhu, Jian; Wu, Enhua; Yang, Zhi-Xin; Bermano, Amit H.; Kalogerakis, Evangelos. We introduce an efficient technique for recovering the vector potential in wavelet space to simulate pointwise incompressible fluids. This technique ensures that fluid velocities remain divergence-free at any point within the fluid domain and preserves local volume during the simulation. Divergence-free wavelets are utilized to calculate the wavelet coefficients of the vector potential, resulting in a smooth vector potential with enhanced accuracy, even when the input velocities exhibit some degree of divergence. This enhanced accuracy eliminates the need for additional computational time to achieve a specific accuracy threshold, as fewer iterations are required for the pressure Poisson solver. Additionally, in 3D, since the wavelet transform is taken in-place, only the memory for storing the vector potential is required. These two features make the method remarkably efficient for recovering the vector potential for fluid simulation. Furthermore, the method can handle various boundary conditions during the wavelet transform, making it adaptable for simulating fluids with Neumann and Dirichlet boundary conditions. Our approach is highly parallelizable and features a time complexity of O(n), allowing for seamless deployment on GPUs and yielding remarkable computational efficiency. Experiments demonstrate that, taking into account the time consumed by the pressure Poisson solver, the method achieves an approximate 2x speedup on GPUs compared to state-of-the-art vector potential recovery techniques while maintaining a precision level of 10^-6 when single-precision floats are employed. The source code of 'Wavelet Potentials' can be found at https://github.com/yours321dog/WaveletPotentials.
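The reason recovering a vector potential guarantees pointwise incompressibility (as in the Wavelet Potentials entry above) is that the curl of any smooth field is divergence-free. A small 2D finite-difference check, independent of the paper's wavelet construction, where a scalar stream function psi plays the role of the potential:

# 2D sketch: velocity taken as the curl of a stream function is discretely divergence-free.
import numpy as np

n, h = 64, 1.0 / 64
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
psi = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)      # arbitrary smooth potential

# u = d(psi)/dy, v = -d(psi)/dx  (2D curl)
u = np.gradient(psi, h, axis=1)
v = -np.gradient(psi, h, axis=0)

# divergence du/dx + dv/dy vanishes up to floating-point error,
# because difference operators along different axes commute
div = np.gradient(u, h, axis=0) + np.gradient(v, h, axis=1)
print(np.abs(div).max())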
Item DivaTrack: Diverse Bodies and Motions from Acceleration-Enhanced 3-Point Trackers (The Eurographics Association and John Wiley & Sons Ltd., 2024) Yang, Dongseok; Kang, Jiho; Ma, Lingni; Greer, Joseph; Ye, Yuting; Lee, Sung-Hee; Bermano, Amit H.; Kalogerakis, Evangelos. Full-body avatar presence is important for immersive social and environmental interactions in digital reality. However, current devices only provide three six-degree-of-freedom (6-DOF) poses, from the headset and two controllers (i.e., three-point trackers). Because it is a highly under-constrained problem, inferring full-body pose from these inputs is challenging, especially when supporting the full range of body proportions and use cases represented by the general population. In this paper, we propose a deep learning framework, DivaTrack, which outperforms existing methods when applied to diverse body sizes and activities. We augment the sparse three-point inputs with linear accelerations from Inertial Measurement Units (IMUs) to improve foot contact prediction. We then condition the otherwise ambiguous lower-body pose with the predictions of foot contact and upper-body pose in a two-stage model. We further stabilize the inferred full-body pose in a wide range of configurations by learning to blend predictions that are computed in two reference frames, each of which is designed for different types of motions. We demonstrate the effectiveness of our design on a large dataset that captures 22 subjects performing challenging locomotion for three-point tracking, including lunges, hula-hooping, and sitting. As shown in a live demo using the Meta VR headset and Xsens IMUs, our method runs in real-time while accurately tracking a user's motion when they perform a diverse set of movements.

Item Real-Time Underwater Spectral Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2024) Monzon, Nestor; Gutierrez, Diego; Akkaynak, Derya; Muñoz, Adolfo; Bermano, Amit H.; Kalogerakis, Evangelos. The light field in an underwater environment is characterized by complex multiple scattering interactions and wavelength-dependent attenuation, requiring significant computational resources for the simulation of underwater scenes. We present a novel approach that makes it possible to simulate multi-spectral underwater scenes, in a physically-based manner, in real time. Our key observation is the following: in the vertical direction, the steady decay in irradiance as a function of depth is characterized by the diffuse downwelling attenuation coefficient, which oceanographers routinely measure for different types of waters. We rely on a database of such real-world measurements to obtain an analytical approximation to the Radiative Transfer Equation, allowing for real-time spectral rendering with results comparable to Monte Carlo ground-truth references, in a fraction of the time. We show results simulating underwater appearance for the different optical water types, including volumetric shadows and dynamic, spatially varying lighting near the water surface.
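The key quantity in the underwater rendering entry above, the diffuse downwelling attenuation coefficient Kd, describes a simple exponential decay of downwelling irradiance with depth, Ed(z, lambda) = Ed(0, lambda) * exp(-Kd(lambda) * z). A small sketch with made-up Kd values (real values come from the oceanographic measurements the paper relies on):

# Exponential decay of downwelling irradiance with depth (illustrative Kd values).
import numpy as np

wavelengths = np.array([450.0, 550.0, 650.0])   # nm: blue, green, red
Kd = np.array([0.02, 0.07, 0.35])               # 1/m, made-up values for one water type
E0 = np.ones(3)                                 # surface downwelling irradiance

def downwelling(depth_m):
    return E0 * np.exp(-Kd * depth_m)

for z in (1.0, 5.0, 20.0):
    print(z, downwelling(z))                    # red vanishes first, blue persists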
Item Perceptual Quality Assessment of NeRF and Neural View Synthesis Methods for Front-Facing Views (The Eurographics Association and John Wiley & Sons Ltd., 2024) Liang, Hanxue; Wu, Tianhao; Hanji, Param; Banterle, Francesco; Gao, Hongyun; Mantiuk, Rafal; Öztireli, Cengiz; Bermano, Amit H.; Kalogerakis, Evangelos. Neural view synthesis (NVS) is one of the most successful techniques for synthesizing free viewpoint videos, capable of achieving high fidelity from only a sparse set of captured images. This success has led to many variants of the techniques, each evaluated on a set of test views typically using image quality metrics such as PSNR, SSIM, or LPIPS. There has been a lack of research on how NVS methods perform with respect to perceived video quality. We present the first study on perceptual evaluation of NVS and NeRF variants. For this study, we collected two datasets of scenes captured in a controlled lab environment as well as in-the-wild. In contrast to existing datasets, these scenes come with reference video sequences, allowing us to test for temporal artifacts and subtle distortions that are easily overlooked when viewing only static images. We measured the quality of videos synthesized by several NVS methods in a well-controlled perceptual quality assessment experiment as well as with many existing state-of-the-art image/video quality metrics. We present a detailed analysis of the results and recommendations for dataset and metric selection for NVS evaluation.

Item GLS-PIA: n-Dimensional Spherical B-Spline Curve Fitting based on Geodesic Least Square with Adaptive Knot Placement (The Eurographics Association and John Wiley & Sons Ltd., 2024) Zhao, Yuming; Wu, Zhongke; Wang, Xingce; Bermano, Amit H.; Kalogerakis, Evangelos. Due to the widespread applications of curves on n-dimensional spheres, fitting curves on n-dimensional spheres has received increasing attention in recent years. However, due to the non-Euclidean nature of spheres, curve fitting methods on n-dimensional spheres often struggle to balance fitting accuracy and curve fairness. In this paper, we propose a new fitting framework, GLS-PIA, for parameterized point sets on n-dimensional spheres to address this challenge, and we provide a proof of the method. Firstly, we propose a progressive iterative approximation method based on geodesic least squares which can directly optimize the geodesic least-squares loss on the n-sphere, improving the accuracy of the fitting. Additionally, we use an error allocation method based on contribution coefficients to ensure the fairness of the fitting curve. Secondly, we propose an adaptive knot placement method based on geodesic difference to estimate a more reasonable distribution of control points in the parameter domain, placing more control points in areas with greater detail. This enables B-spline curves to capture more details with a limited number of control points. Experimental results demonstrate that our framework achieves outstanding performance, especially in handling imbalanced data points. (In this paper, "sphere" refers to the n-sphere (n ≥ 2) unless otherwise specified.)
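The geodesic machinery behind the GLS-PIA entry above rests on two standard facts about the unit n-sphere: the geodesic distance between unit vectors p and q is arccos(p·q), and points along the connecting geodesic are given by spherical linear interpolation. A short sketch of those two primitives (not the paper's fitting algorithm):

# Geodesic distance and spherical linear interpolation (slerp) on the unit n-sphere.
import numpy as np

def geodesic_distance(p, q):
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

def slerp(p, q, t):
    omega = geodesic_distance(p, q)
    if omega < 1e-8:                       # nearly identical points: fall back to p
        return p
    return (np.sin((1 - t) * omega) * p + np.sin(t * omega) * q) / np.sin(omega)

p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])
mid = slerp(p, q, 0.5)                     # midpoint of the great-circle arc
print(geodesic_distance(p, q), mid)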
Item SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling (The Eurographics Association and John Wiley & Sons Ltd., 2024) Binninger, Alexandre; Hertz, Amir; Sorkine-Hornung, Olga; Cohen-Or, Daniel; Giryes, Raja; Bermano, Amit H.; Kalogerakis, Evangelos. We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches, including those of abstract nature. Our method allows users to quickly and easily sketch a shape, and then maps the sketch into the latent space of a part-aware neural implicit shape architecture. SENS analyzes the sketch and encodes its parts into ViT patch encodings, subsequently feeding them into a transformer decoder that converts them to shape embeddings suitable for editing 3D neural implicit shapes. SENS provides intuitive sketch-based generation and editing, and also succeeds in capturing the intent of the user's sketch to generate a variety of novel and expressive 3D shapes, even from abstract and imprecise sketches. Additionally, SENS supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal. It also offers part-based modeling capabilities, enabling the combination of features from multiple sketches to create more complex and customized 3D shapes. We demonstrate the effectiveness of our model compared to the state-of-the-art using objective metric evaluation criteria and a user study, both indicating strong performance on sketches with a medium level of abstraction. Furthermore, we showcase our method's intuitive sketch-based shape editing capabilities, and validate it through a usability study.

Item Real-Time Neural Materials using Block-Compressed Features (The Eurographics Association and John Wiley & Sons Ltd., 2024) Weinreich, Clément; Oliveira, Louis De; Houdard, Antoine; Nader, Georges; Bermano, Amit H.; Kalogerakis, Evangelos. Neural materials typically consist of a collection of neural features along with a decoder network. The main challenge in integrating such models in real-time rendering pipelines lies in the large size required to store their features in GPU memory and the complexity of evaluating the network efficiently. We present a neural material model whose features and decoder are specifically designed to be used in real-time rendering pipelines. Our framework leverages hardware-based block compression (BC) texture formats to store the learned features and trains the model to output the material information continuously in space and scale. To achieve this, we organize the features in a block-based manner and emulate BC6 decompression during training, making it possible to export them as regular BC6 textures. This structure allows us to use high-resolution features while maintaining a low memory footprint. Consequently, this enhances our model's overall capability, enabling the use of a lightweight and simple decoder architecture that can be evaluated directly in a shader. Furthermore, since the learned features can be decoded continuously, it allows for random uv sampling and smooth transitions between scales without needing any subsequent filtering. As a result, our neural material has a small memory footprint and can be decoded extremely fast, adding minimal computational overhead to the rendering pipeline.
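The block-compression idea in the neural-materials entry above can be caricatured as follows: each block of feature texels stores two endpoint vectors, and every texel reconstructs its feature by interpolating between them, which is the general scheme BC-style hardware formats follow. A toy sketch of that decoding step (generic endpoint interpolation only, not the actual BC6H bit layout or the paper's training procedure):

# Toy endpoint-interpolation decoding for block-compressed features (not real BC6H bit packing).
import numpy as np

block, channels = 4, 8                                 # 4x4 texel blocks, 8 feature channels
e0 = np.random.rand(channels)                          # block endpoint A
e1 = np.random.rand(channels)                          # block endpoint B
weights = np.random.rand(block, block, 1)              # per-texel interpolation weights in [0, 1]

decoded = (1.0 - weights) * e0 + weights * e1          # each texel: lerp between the endpoints
print(decoded.shape)                                   # (4, 4, 8) decoded feature block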
Item Physically-based Analytical Erosion for fast Terrain Generation (The Eurographics Association and John Wiley & Sons Ltd., 2024) Tzathas, Petros; Gailleton, Boris; Steer, Philippe; Cordonnier, Guillaume; Bermano, Amit H.; Kalogerakis, Evangelos. Terrain generation methods have long been divided between procedural and physically-based. Procedural methods build upon the fast evaluation of a mathematical function but suffer from a lack of geological consistency, while physically-based simulation enforces this consistency at the cost of thousands of iterations unraveling the history of the landscape. In particular, the simulation of the competition between tectonic uplift and fluvial erosion expressed by the stream power law raised recent interest in computer graphics, as this allows the generation and control of consistent large-scale mountain ranges, albeit at the cost of a lengthy simulation. In this paper, we explore the analytical solutions of the stream power law and propose a method that is both physically-based and procedural, allowing fast and consistent large-scale terrain generation. In our approach, time is no longer the stopping criterion of an iterative process but acts as the parameter of a mathematical function, a slider that controls the aging of the input terrain from a subtle erosion to the complete replacement by a fully formed mountain range. While analytical solutions have been proposed by the geomorphology community for the 1D case, extending them to a 2D heightmap proves challenging. We propose an efficient implementation of the analytical solutions with a multigrid-accelerated iterative process and solutions to incorporate landslides and hillslope processes – two erosion factors that complement the stream power law.

Item CharacterMixer: Rig-Aware Interpolation of 3D Characters (The Eurographics Association and John Wiley & Sons Ltd., 2024) Zhan, Xiao; Fu, Rao; Ritchie, Daniel; Bermano, Amit H.; Kalogerakis, Evangelos. We present CharacterMixer, a system for blending two rigged 3D characters with different mesh and skeleton topologies while maintaining a rig throughout interpolation. CharacterMixer also enables interpolation during motion for such characters, a novel feature. Interpolation is an important shape editing operation, but prior methods have limitations when applied to rigged characters: they either ignore the rig (making interpolated characters no longer posable) or use a fixed rig and mesh topology. To handle different mesh topologies, CharacterMixer uses a signed distance field (SDF) representation of character shapes, with one SDF per bone. To handle different skeleton topologies, it computes a hierarchical correspondence between source and target character skeletons and interpolates the SDFs of corresponding bones. This correspondence also allows the creation of a single "unified skeleton" for posing and animating interpolated characters. We show that CharacterMixer produces qualitatively better interpolation results than two state-of-the-art methods while preserving a rig throughout interpolation. Project page: https://seanxzhan.github.io/projects/CharacterMixer.
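The stream power law referred to in the terrain-generation entry above states that fluvial erosion rate scales as K * A^m * S^n (drainage area A, slope S); at steady state uplift U balances erosion, which gives the channel slope directly as S = (U/K)^(1/n) * A^(-m/n). A 1D worked example that integrates this slope upstream to obtain a river profile (illustrative constants, not the paper's 2D solver):

# 1D steady-state river profile from the stream power law: U = K * A^m * S^n.
import numpy as np

U, K, m, n = 1e-3, 1e-5, 0.5, 1.0            # illustrative uplift, erodibility, exponents
dx = 100.0                                   # node spacing along the channel (m)
x = np.arange(1, 200) * dx                   # distance from the outlet
A = 1e6 + 50.0 * (x[-1] - x)                 # toy drainage area, larger downstream

S = (U / K) ** (1.0 / n) * A ** (-m / n)     # steady-state slope at each node
z = np.concatenate([[0.0], np.cumsum(S * dx)])   # integrate slope upstream from the outlet
print(z[-1])                                 # elevation at the channel head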
Item EUROGRAPHICS 2024: CGF 43-2 Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2024) Bermano, Amit H.; Kalogerakis, Evangelos

Item Practical Method to Estimate Fabric Mechanics from Metadata (The Eurographics Association and John Wiley & Sons Ltd., 2024) Dominguez-Elvira, Henar; Nicás, Alicia; Cirio, Gabriel; Rodríguez, Alejandro; Garces, Elena; Bermano, Amit H.; Kalogerakis, Evangelos. Estimating fabric mechanical properties is crucial to create realistic digital twins. Existing methods typically require testing physical fabric samples with expensive devices or cumbersome capture setups. In this work, we propose a method to estimate fabric mechanics just from known manufacturer metadata such as the fabric family, the density, the composition, and the thickness. Further, to alleviate the need to know the fabric family – which might be ambiguous or unknown for non-specialists – we propose an end-to-end neural method that works with planar images of the textile as input. We evaluate our methods using extensive tests that include the industry-standard Cusick test and demonstrate that both of them produce drapes that strongly correlate with the ground-truth estimates provided by lab equipment. Our method is the first to propose such a simple capture method for mechanical properties, outperforming other methods that require testing the fabric in specific setups.

Item Physically Based Real-Time Rendering of Atmospheres using Mie Theory (The Eurographics Association and John Wiley & Sons Ltd., 2024) Schneegans, Simon; Meyran, Tim; Ginkel, Ingo; Zachmann, Gabriel; Gerndt, Andreas; Bermano, Amit H.; Kalogerakis, Evangelos. Most real-time rendering models for atmospheric effects have been designed and optimized for Earth's atmosphere. Some authors have proposed approaches for rendering other atmospheres, but these methods still use approximations that are only valid on Earth. For instance, the iconic blue glow of Martian sunsets cannot be represented properly, as the complex interference effects of light scattered at dust particles cannot be captured by these approximations. In this paper, we present an approach for generalizing an existing model to make it capable of rendering extraterrestrial atmospheres. This is done by replacing the approximations with a physical model based on Mie theory. We use the particle-size distribution, the particle-density distribution, as well as the wavelength-dependent refractive index of atmospheric particles as input. To demonstrate the feasibility of this idea, we extend the model by Bruneton et al. [BN08] and implement it in CosmoScout VR, an open-source visualization of our Solar System. In a first step, we use Mie theory to precompute the scattering behaviour of a particle mixture. Then, multi-scattering is simulated, and finally the precomputation results are used for real-time rendering. We demonstrate that this not only improves the visualization of the Martian atmosphere, but also creates more realistic results for our own atmosphere.

Item Surface-aware Mesh Texture Synthesis with Pre-trained 2D CNNs (The Eurographics Association and John Wiley & Sons Ltd., 2024) Kovács, Áron Samuel; Hermosilla, Pedro; Raidou, Renata Georgia; Bermano, Amit H.; Kalogerakis, Evangelos. Mesh texture synthesis is a key component in the automatic generation of 3D content. Existing learning-based methods have drawbacks: either by disregarding the shape manifold during texture generation or by requiring a large number of different views to mitigate occlusion-related inconsistencies. In this paper, we present a novel surface-aware approach for mesh texture synthesis that overcomes these drawbacks by leveraging the pre-trained weights of 2D Convolutional Neural Networks (CNNs) with the same architecture, but with convolutions designed for 3D meshes. Our proposed network keeps track of the oriented patches surrounding each texel, enabling seamless texture synthesis and retaining local similarity to classical 2D convolutions with square kernels. Our approach allows us to synthesize textures that account for the geometric content of mesh surfaces, eliminating discontinuities and achieving comparable quality to 2D image synthesis algorithms. We compare our approach with state-of-the-art methods where, through qualitative and quantitative evaluations, we demonstrate that our approach is more effective for a variety of meshes and styles, while also producing visually appealing and consistent textures on meshes.
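Whatever scattering model is used (the Mie-based one in the atmosphere entry above, or simpler analytic ones), transmittance along a path follows the Beer-Lambert law, T = exp(-optical depth), with particle density typically falling off exponentially with altitude. A minimal sketch with illustrative constants; it shows the transmittance integral only, not Mie scattering itself:

# Transmittance along a vertical path with exponentially decreasing particle density.
import numpy as np

beta0 = np.array([5.8e-6, 1.35e-5, 3.31e-5])   # sea-level extinction per R, G, B channel (1/m), illustrative
H_scale = 8000.0                                # density scale height (m), illustrative
altitudes = np.linspace(0.0, 60000.0, 2000)     # sample points along the path (m)

density = np.exp(-altitudes / H_scale)          # relative particle density profile
optical_depth = np.trapz(beta0 * density[:, None], altitudes, axis=0)
transmittance = np.exp(-optical_depth)
print(transmittance)                            # blue is attenuated more than red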
Item Interactive Exploration of Vivid Material Iridescence based on Bragg Mirrors (The Eurographics Association and John Wiley & Sons Ltd., 2024) Fourneau, Gary; Pacanowski, Romain; Barla, Pascal; Bermano, Amit H.; Kalogerakis, Evangelos. Many animals, plants, or gems exhibit iridescent material appearance in nature. These are due to specific geometric structures at scales comparable to visible wavelengths, yielding so-called structural colors. The most vivid examples are due to photonic crystals, where the same structure is repeated in one, two, or three dimensions, augmenting the magnitude and complexity of interference effects. In this paper, we study the appearance of 1D photonic crystals (repetitive pairs of thin films), also called Bragg mirrors. Previous work has considered the effect of multiple thin films using the classical transfer matrix approach, which increases in complexity when the number of repetitions increases. Our first contribution is to introduce a more efficient closed-form formula [Yeh88] for Bragg mirror reflectance to the graphics community, as well as an approximation that lends itself to efficient spectral integration for RGB rendering. We then explore the appearance of stacks made of rough Bragg layers. Here our contribution is to show that they may lead to a ballistic transmission, significantly speeding up position-free rendering and leading to an efficient single-reflection BRDF model.

Item Learning to Stabilize Faces (The Eurographics Association and John Wiley & Sons Ltd., 2024) Bednarik, Jan; Wood, Erroll; Choutas, Vassilis; Bolkart, Timo; Wang, Daoye; Wu, Chenglei; Beeler, Thabo; Bermano, Amit H.; Kalogerakis, Evangelos. Nowadays, it is possible to scan faces and automatically register them with high quality. However, the resulting face meshes often need further processing: we need to stabilize them to remove unwanted head movement. Stabilization is important for tasks like game development or movie making which require facial expressions to be cleanly separated from rigid head motion. Since manual stabilization is labor-intensive, there have been attempts to automate it. However, previous methods remain impractical: they either still require some manual input, produce imprecise alignments, rely on dubious heuristics and slow optimization, or assume a temporally ordered input.
Instead, we present a new learning-based approach that is simple and fully automatic. We treat stabilization as a regression problem: given two face meshes, our network directly predicts the rigid transform between them that brings their skulls into alignment. We generate synthetic training data using a 3D Morphable Model (3DMM), exploiting the fact that 3DMM parameters separate skull motion from facial skin motion. Through extensive experiments we show that our approach outperforms the state-of-the-art both quantitatively and qualitatively on the tasks of stabilizing discrete sets of facial expressions as well as dynamic facial performances. Furthermore, we provide an ablation study detailing the design choices and best practices to help others adopt our approach for their own uses.
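The face-stabilization entry above predicts the skull-aligning rigid transform with a network; the classical, non-learned counterpart of that step is the Kabsch/Procrustes solution, which recovers the best rigid transform between two corresponding point sets in closed form. A self-contained sketch of that baseline (not the paper's method):

# Kabsch algorithm: best-fit rotation R and translation t mapping points P onto Q.
import numpy as np

def rigid_align(P, Q):
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)            # cross-covariance of centered point sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

P = np.random.rand(100, 3)                       # e.g. vertices of an unstabilized mesh
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.3])
Q = P @ R_true.T + t_true                        # same points after a rigid head motion
R, t = rigid_align(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))   # transform recovered exactly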