40-Issue 6
Browsing 40-Issue 6 by Title
Now showing 1 - 20 of 31
Item: Action Unit Driven Facial Expression Synthesis from a Single Image with Patch Attentive GAN
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Zhao, Yong; Yang, Le; Pei, Ercheng; Oveneke, Meshia Cédric; Alioscha‐Perez, Mitchel; Li, Longfei; Jiang, Dongmei; Sahli, Hichem; Benes, Bedrich and Hauser, Helwig
Recent advances in generative adversarial networks (GANs) have shown tremendous success for facial expression generation tasks. However, generating vivid and expressive facial expressions at the Action Unit (AU) level is still challenging, because automatic facial expression analysis for AU intensity is itself a difficult, unsolved task. In this paper, we propose a novel synthesis‐by‐analysis approach that leverages the power of the GAN framework and a state‐of‐the‐art AU detection model to achieve better results for AU‐driven facial expression generation. Specifically, we design a novel discriminator architecture by modifying the patch‐attentive AU detection network for AU intensity estimation, and combine it with a global image encoder for adversarial learning to force the generator to produce more expressive and realistic facial images. We also introduce a balanced sampling approach to alleviate the imbalanced learning problem in AU synthesis. Extensive experimental results on DISFA and DISFA+ show that our approach outperforms the state‐of‐the‐art in terms of photo‐realism and expressiveness of the facial expressions, both quantitatively and qualitatively.

Item: Customized Summarizations of Visual Data Collections
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Yuan, Mengke; Ghanem, Bernard; Yan, Dong‐Ming; Wu, Baoyuan; Zhang, Xiaopeng; Wonka, Peter; Benes, Bedrich and Hauser, Helwig
We propose a framework to generate customized summarizations of visual data collections, such as collections of images, materials, 3D shapes, and 3D scenes. We assume that the elements in the visual data collection can be mapped to a set of vectors in a feature space, in which a fitness score for each element can be defined, and we pose the problem of customized summarization as selecting a subset of these elements. We first describe the design choices a user should be able to specify for modeling customized summarizations and propose a corresponding user interface. We then formulate the problem as a constrained optimization problem with binary variables and propose a practical and fast algorithm based on the alternating direction method of multipliers (ADMM). Our results show that our problem formulation enables a wide variety of customized summarizations, and that our solver is both significantly faster than state‐of‐the‐art commercial integer programming solvers and produces better solutions than fast relaxation‐based solvers.
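The subset-selection formulation above can be illustrated with a much simpler baseline. The sketch below is not the paper's ADMM solver: it is a greedy stand-in that assumes only a feature matrix, a per-element fitness vector, and a budget k, and trades fitness against redundancy; the function name, the diversity term, and the defaults are illustrative assumptions.

```python
import numpy as np

def greedy_summary(features, fitness, k, diversity_weight=1.0):
    """Greedily pick k elements, trading fitness against redundancy.

    Toy stand-in for the constrained binary program described above;
    an ADMM solver like the paper's handles richer user-specified
    constraints and scales far better.
    """
    n = len(fitness)
    selected = []
    for _ in range(k):
        best_i, best_score = -1, -np.inf
        for i in range(n):
            if i in selected:
                continue
            if selected:
                # Redundancy: distance to the closest already-selected element.
                d = np.linalg.norm(features[i] - features[selected], axis=1).min()
            else:
                d = 0.0  # first pick is judged on fitness alone
            score = fitness[i] + diversity_weight * d
            if score > best_score:
                best_i, best_score = i, score
        selected.append(best_i)
    return selected
```

For a collection with features F (N x d) and fitness scores s, `greedy_summary(F, s, k=10)` returns the indices of a 10-element summary; raising `diversity_weight` favours coverage over individual fitness.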
Item: Deep Neural Models for Illumination Estimation and Relighting: A Survey
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Einabadi, Farshad; Guillemaut, Jean‐Yves; Hilton, Adrian; Benes, Bedrich and Hauser, Helwig
Scene relighting and estimating the illumination of a real scene for insertion of virtual objects in a mixed‐reality scenario are well‐studied challenges in the computer vision and graphics fields. Classical inverse rendering approaches aim to decompose a scene into its orthogonal constituting elements, namely scene geometry, illumination and surface materials, which can later be used for augmented reality or to render new images under novel lighting or viewpoints. Recently, the application of deep neural computing to illumination estimation, relighting and inverse rendering has shown promising results. This contribution aims to bring together, in a coherent manner, current advances at the intersection of these topics. We examine in detail the attributes of the proposed approaches, presented in three categories: scene illumination estimation, relighting with reflectance‐aware scene‐specific representations, and finally relighting as image‐to‐image transformations. Each category is concluded with a discussion of the main characteristics of the current methods and possible future trends. We also provide an overview of current publicly available datasets for neural lighting applications.

Item: Deep Reflectance Scanning: Recovering Spatially‐varying Material Appearance from a Flash‐lit Video Sequence
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Ye, Wenjie; Dong, Yue; Peers, Pieter; Guo, Baining; Benes, Bedrich and Hauser, Helwig
In this paper we present a novel method for recovering high‐resolution, spatially‐varying isotropic surface reflectance of a planar exemplar from a flash‐lit close‐up video sequence captured with a regular hand‐held mobile phone. We do not require careful calibration of the camera and lighting parameters; instead, we compute a per‐pixel flow map using a deep neural network to align the input video frames. For each video frame, we also extract the reflectance parameters, warp the neural reflectance features directly using the per‐pixel flow, and subsequently pool the warped features. Our method facilitates convenient hand‐held acquisition of spatially‐varying surface reflectance with commodity hardware by non‐expert users. Furthermore, our method enables aggregation of reflectance features from surface points visible in only a subset of the captured video frames, enabling the creation of high‐resolution reflectance maps that exceed the native camera resolution. We demonstrate and validate our method on a variety of synthetic and real‐world spatially‐varying materials.
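A minimal sketch of the warp-then-pool step described above, assuming per-frame feature maps and backward per-pixel flows are already available; the nearest-neighbour gather and mean pooling are simplifications of the paper's learned, differentiable pipeline, and all names and shapes are illustrative.

```python
import numpy as np

def warp_features(feat, flow):
    """Warp a feature map (H, W, C) by a backward per-pixel flow (H, W, 2).

    Nearest-neighbour gather at (p + flow); illustrative only, since the
    paper uses a network-predicted flow with differentiable warping.
    """
    h, w, _ = feat.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    return feat[src_y, src_x]

def pool_frames(feature_maps, flows):
    """Align per-frame reflectance features into a common reference frame
    and pool them by averaging across the aligned frames."""
    warped = np.stack([warp_features(f, fl) for f, fl in zip(feature_maps, flows)])
    return warped.mean(axis=0)
```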
Item: Design and Evaluation of Visualization Techniques to Facilitate Argument Exploration
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Khartabil, D.; Collins, C.; Wells, S.; Bach, B.; Kennedy, J.; Benes, Bedrich and Hauser, Helwig
This paper reports the design and comparison of three visualizations to represent the structure and content within arguments. Arguments are artifacts of reasoning widely used across domains such as education, policy making, and science. An argument is made up of sequences of statements (premises) which can support or contradict each other, individually or in groups through Boolean operators. Understanding the resulting hierarchical structure of arguments while being able to read the arguments' text poses problems related to overview, detail, and navigation. Based on interviews with argument analysts, we iteratively designed three techniques, each using combinations of tree visualizations (sunburst, icicle), content display (in‐situ, tooltip), and interactive navigation. Structured discussions with the analysts show benefits of each of these techniques; for example, the sunburst is good at presenting an overview, while showing argument text in situ is better than pop‐ups. A controlled user study with 21 participants and three tasks shows complementary evidence, suggesting that a sunburst with pop‐ups for the content is the best trade‐off solution. Our results can inform visualizations within existing argument visualization tools and increase the visibility of 'novel‐and‐effective' visualizations in the argument visualization community.

Item: An Efficient Hybrid Optimization Strategy for Surface Reconstruction
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Bertolino, Giulia; Montemurro, Marco; Perry, Nicolas; Pourroy, Franck; Benes, Bedrich and Hauser, Helwig
An efficient surface reconstruction strategy is presented in this study, which is able to approximate non‐convex sets of target points (TPs). The approach is split in two phases: (a) the mapping phase, making use of the shape preserving method (SPM) to get a proper parametrization of each sub‐domain composing the TPs set; (b) the fitting phase, where each patch is fitted by means of a suitable non‐uniform rational basis spline (NURBS) surface by considering, as design variables, all parameters involved in its definition. To this purpose, the surface fitting problem is formulated as a constrained non‐linear programming problem (CNLPP) defined over a domain of changing dimension, wherein both the number and the value of the design variables are optimized. To deal with this CNLPP, the optimization process is split in two steps. Firstly, a special genetic algorithm (GA) optimizes both the value and the number of design variables by means of a two‐level evolution strategy (species and individuals). Secondly, the solution provided by the GA constitutes the initial guess for the deterministic optimization, which aims at improving the accuracy of the fitting surfaces. The effectiveness of the proposed methodology is proven through some meaningful benchmarks taken from the literature.

Item: Efficient Rendering of Ocular Wavefront Aberrations using Tiled Point‐Spread Function Splatting
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Csoba, István; Kunkli, Roland; Benes, Bedrich and Hauser, Helwig
Visual aberrations are the imperfections in human vision, which play an important role in our everyday lives. Existing algorithms to simulate such conditions are either not suited for low‐latency workloads or limit the kinds of supported aberrations. In this paper, we present a new simulation method that supports arbitrary visual aberrations and runs at interactive, near real‐time performance on commodity hardware. Furthermore, our method only requires a single set of on‐axis phase aberration coefficients as input and handles dynamic changes of pupil size and focus distance at runtime. We first describe a custom parametric eye model and a parameter estimation method to find the physical properties of the simulated eye. Next, we describe our parameter sampling strategy, which we use with the estimated eye model to establish a coarse point‐spread function (PSF) grid. We also propose a GPU‐based interpolation scheme for the kernel grid, which we use at runtime to obtain the final vision simulation by extending an existing tile‐based convolution approach. We showcase the capabilities of our eye estimation and rendering processes using several different eye conditions and provide the corresponding performance metrics to demonstrate the applicability of our method for interactive environments.
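The tile-based convolution that the method above extends can be sketched on the CPU as follows; `psf_grid`, the tile size, and the padding logic are illustrative assumptions, and the paper's GPU method additionally interpolates PSFs across the grid to hide seams, which this sketch skips.

```python
import numpy as np
from scipy.signal import fftconvolve

def tiled_psf_blur(image, psf_grid, tile):
    """Spatially-varying blur via per-tile convolution.

    image: (H, W) grayscale array; psf_grid[i][j]: PSF kernel for tile
    (i, j). Each tile is convolved with its own PSF on a padded window
    so the blur sees its neighbourhood; only the tile interior is kept.
    """
    h, w = image.shape
    out = np.zeros_like(image)
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            psf = psf_grid[i // tile][j // tile]
            pad = psf.shape[0] // 2
            y0, y1 = max(i - pad, 0), min(i + tile + pad, h)
            x0, x1 = max(j - pad, 0), min(j + tile + pad, w)
            blurred = fftconvolve(image[y0:y1, x0:x1], psf, mode="same")
            ti, tj = min(tile, h - i), min(tile, w - j)
            out[i:i + ti, j:j + tj] = blurred[i - y0:i - y0 + ti, j - x0:j - x0 + tj]
    return out
```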
Item: Estimating Garment Patterns from Static Scan Data
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Bang, Seungbae; Korosteleva, Maria; Lee, Sung‐Hee; Benes, Bedrich and Hauser, Helwig
The acquisition of highly detailed static 3D scan data for people in clothing is becoming widely available. Since 3D scan data is given as a single mesh without semantic separation, in order to animate the data it is necessary to model the shape and deformation behaviour of individual body and garment parts. This paper presents a new method for generating simulation‐ready garment models from 3D static scan data of clothed humans. A key contribution of our method is a novel approach to segmenting garments by finding optimal boundaries between the skin and the garment. Our boundary‐based garment segmentation method allows for stable and smooth separation of garments by using an implicit representation of the boundary and its optimization strategy. In addition, we present a novel framework to construct a 2D pattern from the segmented garment and place it around the body for a draping simulation. The effectiveness of our method is validated by generating garment patterns for a number of scans.

Item: Example‐Based Colour Transfer for 3D Point Clouds
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Goudé, Ific; Cozot, Rémi; Le Meur, Olivier; Bouatouch, Kadi; Benes, Bedrich and Hauser, Helwig
Example‐based colour transfer between images, which has raised a lot of interest in the past decades, consists of transferring the colour of one image to another. Many methods based on colour distributions have been proposed, and more recently, the efficiency of neural networks has been demonstrated again for colour transfer problems. In this paper, we propose a new pipeline with methods adapted from the image domain to automatically transfer the colour from a target point cloud to an input point cloud. These colour transfer methods are based on colour distributions and account for the geometry of the point clouds to produce a coherent result. The proposed methods rely on simple statistical analysis, are effective, and succeed in transferring the colour style from one point cloud to another. The qualitative results of the colour transfers are evaluated and compared with existing methods.
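The simplest distribution-based transfer the abstract alludes to is per-channel mean and standard deviation matching. A minimal, geometry-agnostic sketch, assuming colours are stored as (N, 3) arrays in [0, 1]; the function name and defaults are illustrative:

```python
import numpy as np

def transfer_colour_stats(src_colours, tgt_colours, eps=1e-8):
    """Match the per-channel colour mean and standard deviation of the
    source point cloud to those of the target point cloud.

    src_colours, tgt_colours: (N, 3) and (M, 3) arrays. A baseline only;
    the paper's methods additionally account for point-cloud geometry.
    """
    mu_s, std_s = src_colours.mean(axis=0), src_colours.std(axis=0)
    mu_t, std_t = tgt_colours.mean(axis=0), tgt_colours.std(axis=0)
    out = (src_colours - mu_s) / (std_s + eps) * std_t + mu_t
    return np.clip(out, 0.0, 1.0)
```

Running the same statistics in a decorrelated colour space (e.g. lαβ, as in classic image colour transfer) typically produces more pleasing results than raw RGB.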
Item: Fashion Transfer: Dressing 3D Characters from Stylized Fashion Sketches
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Fondevilla, Amelie; Rohmer, Damien; Hahmann, Stefanie; Bousseau, Adrien; Cani, Marie‐Paule; Benes, Bedrich and Hauser, Helwig
Fashion design often starts with hand‐drawn, expressive sketches that communicate the essence of a garment over idealized human bodies. We propose an approach to automatically dress virtual characters from such input, previously complemented with user annotations. In contrast to prior work requiring users to draw garments with accurate proportions over each virtual character to be dressed, our method follows a style transfer strategy: the information extracted from a single, annotated fashion sketch can be used to inform the synthesis of one to many new garments with similar style, yet different proportions. In particular, we define the style of a loose garment from its silhouette and folds, which we extract from the drawing. Key to our method is our strategy to extract both the shape and the repetitive patterns of folds from the 2D input. As our results show, each input sketch can be used to dress a variety of characters of different morphologies, from virtual humans to cartoon‐style characters.

Item: Fast Ray Tracing of Scale‐Invariant Integral Surfaces
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Aydinlilar, Melike; Zanni, Cedric; Benes, Bedrich and Hauser, Helwig
Scale‐invariant integral surfaces, which are implicit representations of surfaces, provide a way to define smooth surfaces from skeletons with prescribed radii defined at their vertices. We introduce a new rendering pipeline that visualizes such surfaces in real time. We rely on the distance to the skeleton to define a sampling strategy along the camera rays, dividing each ray into sub‐intervals. The proposed strategy is chosen to capture the main field variations. The resulting intervals are processed iteratively, relying on two main ingredients: quadratic interpolation, and field mapping to an approximate squared homothetic distance. The first provides efficient root finding, while the second increases the precision of the interpolation; the combination of both results in an efficient processing routine. Finally, we present a GPU implementation that relies on a dynamic data structure to efficiently generate the intervals along the ray. This data structure also serves as an acceleration structure that allows constant‐time access to the primitives of interest during the processing of a given ray.
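The quadratic-interpolation root finding used for interval processing can be sketched as follows; the field function, isovalue, and single-fit strategy are illustrative assumptions (the paper iterates over sub-intervals and first maps the field to an approximate squared homothetic distance so that it is closer to quadratic).

```python
import numpy as np

def quadratic_root(f, t0, t1, iso=0.5):
    """Estimate a root of f(t) - iso on [t0, t1] from one quadratic fit.

    Samples the field at the endpoints and midpoint, fits a parabola
    through the three samples, and returns the in-interval root, if any.
    """
    tm = 0.5 * (t0 + t1)
    # Solve for a*t^2 + b*t + c passing through the three samples.
    A = np.array([[t0 * t0, t0, 1.0],
                  [tm * tm, tm, 1.0],
                  [t1 * t1, t1, 1.0]])
    y = np.array([f(t0) - iso, f(tm) - iso, f(t1) - iso])
    a, b, c = np.linalg.solve(A, y)
    if b * b - 4.0 * a * c < 0.0:
        return None  # no crossing predicted in this interval
    for r in sorted(np.roots([a, b, c]).real):
        if t0 <= r <= t1:
            return r
    return None
```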
Item: Fluid Reconstruction and Editing from a Monocular Video based on the SPH Model with External Force Guidance
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Nie, Xiaoying; Hu, Yong; Su, Zhiyuan; Shen, Xukun; Benes, Bedrich and Hauser, Helwig
We present a general method for reconstructing and editing a 3D fluid volume from a monocular fluid video. Although researchers have developed many monocular video‐based methods, the reconstructed results are merely a single layer of surface geometry, lack accurate physical attributes of fluids, and are challenging to edit. We obtain a high‐quality 3D fluid volume by extending the smoothed particle hydrodynamics (SPH) model with external force guidance. For reconstruction, we design target particles recovered using the shape‐from‐shading (SFS) method and initialize fluid particles that are spatially consistent with the target particles. For editing, we translate the deformation of the target particles into the 3D fluid volume by merging user‐specified features of interest. Separating the low‐ and high‐frequency components of the height field allows us to efficiently solve the motion equations for a liquid while retaining enough detail to obtain realistic‐looking behaviours. Our experimental results compare favourably to the state‐of‐the‐art in terms of global fluid volume motion features and fluid surface details, and demonstrate that our model can achieve desirable and pleasing effects.
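A hedged sketch of what an external guidance force could look like inside an SPH loop, assuming fluid and target particles are given as (N, 3) and (M, 3) position arrays; the spring form, stiffness, radius, and nearest-neighbour coupling are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def guidance_forces(fluid_pos, target_pos, stiffness=50.0, radius=0.1):
    """Spring-like external forces pulling each fluid particle toward its
    nearest target particle, to be added to the usual SPH forces
    (pressure, viscosity, gravity) at every simulation step.
    """
    tree = cKDTree(target_pos)
    dist, idx = tree.query(fluid_pos)        # nearest target per particle
    delta = target_pos[idx] - fluid_pos
    # Only guide particles that have drifted beyond the tolerance radius.
    active = np.where(dist > radius, 1.0, 0.0)[:, None]
    return stiffness * delta * active
```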
Item: From Noon to Sunset: Interactive Rendering, Relighting, and Recolouring of Landscape Photographs by Modifying Solar Position
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Türe, Murat; Çıklabakkal, Mustafa Ege; Erdem, Aykut; Erdem, Erkut; Satılmış, Pinar; Akyüz, Ahmet Oguz; Benes, Bedrich and Hauser, Helwig
Image editing is a commonly studied problem in computer graphics. Despite the presence of many advanced editing tools, there is no satisfactory solution to controllably update the position of the sun using a single image. This problem is made complicated by the presence of clouds, complex landscapes, and the atmospheric effects that must be accounted for. In this paper, we tackle this problem starting with only a single photograph. After the user clicks on the initial position of the sun, our algorithm performs several estimation and segmentation processes to find the horizon, scene depth, clouds, and the skyline. After this initial process, the user can make both fine‐ and large‐scale changes to the position of the sun: it can be set beneath the mountains or moved behind the clouds, practically turning a midday photograph into a sunset (or vice versa). We leverage a precomputed atmospheric scattering algorithm to make all of these changes not only realistic but also real‐time. We demonstrate our results using both clear and cloudy skies, showing how to add, remove, and relight clouds, all the while allowing for advanced effects such as scattering, shadows, light shafts, and lens flares.

Item: Half‐body Portrait Relighting with Overcomplete Lighting Representation
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Song, Guoxian; Cham, Tat‐Jen; Cai, Jianfei; Zheng, Jianmin; Benes, Bedrich and Hauser, Helwig
We present a neural‐based model for relighting a half‐body portrait image by simply referring to another portrait image with the desired lighting condition. Rather than following the classical inverse rendering methodology that involves estimating normals, albedo and environment maps, we implicitly encode the subject and lighting in a latent space, and use these latent codes to generate relighted images by neural rendering. A key technical innovation is the use of a novel overcomplete lighting representation, which facilitates lighting interpolation in the latent space and helps regularize the self‐organization of the lighting latent space during training. In addition, we propose a novel multiplicative neural renderer that more effectively combines the subject and lighting latent codes for rendering. We also created a large‐scale photorealistic rendered relighting dataset for training, which allows our model to generalize well to real images. Extensive experiments demonstrate that our system not only outperforms existing methods for referral‐based portrait relighting, but can also generate sequences of relighted images via lighting rotations.

Item: IMAT: The Iterative Medial Axis Transform
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Lee, Yonghyeon; Baek, Jonghyuk; Kim, Young Min; Park, Frank Chongwoo; Benes, Bedrich and Hauser, Helwig
We present the iterative medial axis transform (IMAT), an iterative descent method that constructs a medial axis transform (MAT) for a sparse, noisy, oriented point cloud sampled from an object's boundary. We first establish the equivalence between the traditional definition of the MAT of an object, i.e., the set of centres and corresponding radii of all balls maximally inscribed inside the object, and an alternative characterization that matches the boundary enclosing the union of the balls with the object boundary. Based on this boundary equivalence characterization, a new MAT algorithm is proposed, in which an error function that reflects the difference between the two boundaries is minimized while restricting the number of balls to within some a priori specified upper limit. An iterative descent method with guaranteed local convergence is developed for the minimization that is also amenable to parallelization. Both quantitative and qualitative analyses of diverse 2D and 3D objects demonstrate the noise robustness, shape fidelity, and representation efficiency of the resulting MAT.

Item: Inverse Dynamics Filtering for Sampling‐based Motion Control
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Xie, Kaixiang; Kry, Paul G.; Benes, Bedrich and Hauser, Helwig
We improve the sampling‐based motion control method proposed by Liu et al. using inverse dynamics. To deal with noise in the motion capture data, we filter the motion using a Butterworth filter, choosing the cutoff frequency such that the zero‐moment point (ZMP) falls within the support polygon for the greatest number of frames. We discuss how to detect foot contact for foot‐and‐ground optimization and inverse dynamics, and we optimize to increase the area of the support polygon. Sample simulations receive filtered inverse dynamics torques at frames where the ZMP is sufficiently close to the support polygon, which simplifies the problem of finding the PD targets that produce physically valid control matching the target motion. We test our method on different motions and demonstrate that it has lower error, higher success rates, and generally produces smoother results.
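The cutoff-selection step just described can be sketched directly with SciPy; the filter order, candidate range, and the `zmp_in_support` predicate are illustrative assumptions (computing the ZMP itself requires the character model and contact state, which the sketch leaves abstract).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pick_cutoff(motion, zmp_in_support, fs, candidates=np.arange(2.0, 12.0, 0.5)):
    """Sweep cutoff frequencies for a zero-phase Butterworth low-pass
    filter and keep the one whose filtered motion yields the most frames
    with the ZMP inside the support polygon.

    motion: (frames, dofs) array; fs: sampling rate in Hz;
    zmp_in_support(filtered): user-supplied predicate returning a
    boolean per frame.
    """
    best_fc, best_count = None, -1
    for fc in candidates:
        b, a = butter(4, fc / (0.5 * fs))     # 4th-order low-pass
        filtered = filtfilt(b, a, motion, axis=0)  # zero-phase filtering
        count = int(zmp_in_support(filtered).sum())
        if count > best_count:
            best_fc, best_count = fc, count
    return best_fc
```

Zero-phase filtering (filtfilt) matters here: a causal filter would shift the motion in time and bias the ZMP test.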
Item: Issue Information
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Benes, Bedrich and Hauser, Helwig

Item: Linear Polarization Demosaicking for Monochrome and Colour Polarization Focal Plane Arrays
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Qiu, Simeng; Fu, Qiang; Wang, Congli; Heidrich, Wolfgang; Benes, Bedrich and Hauser, Helwig
Division‐of‐focal‐plane (DoFP) polarization image sensors allow for snapshot imaging of linear polarization effects with inexpensive and straightforward setups. However, conventional interpolation‐based image reconstruction methods for such sensors produce unreliable and noisy estimates of quantities such as the Degree of Linear Polarization (DoLP) or the Angle of Linear Polarization (AoLP). In this paper, we propose a polarization demosaicking algorithm that inverts the polarization image formation model for both monochrome and colour DoFP cameras. Compared to previous interpolation methods, our approach can significantly reduce noise‐induced artefacts and drastically increase the accuracy in estimating polarization states. We evaluate and demonstrate the performance of the methods on a new high‐resolution colour polarization dataset. Simulation and experimental results show that the proposed reconstruction and analysis tools offer an effective solution to polarization imaging.
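For reference, the DoLP and AoLP mentioned above follow from the standard linear Stokes relations. The sketch below assumes four already-demosaicked, full-resolution intensity images (one per polarizer angle), whereas the paper inverts the mosaic formation model directly instead of working on such interpolated images.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Compute the linear Stokes parameters, DoLP and AoLP from the four
    polarizer-angle intensity images of a DoFP sensor.

    Uses the textbook relations I(theta) = (S0 + S1*cos 2theta
    + S2*sin 2theta) / 2 for theta in {0, 45, 90, 135} degrees.
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-8)
    aolp = 0.5 * np.arctan2(s2, s1)      # radians, in (-pi/2, pi/2]
    return s0, s1, s2, dolp, aolp
```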
Item: Neural BRDF Representation and Importance Sampling
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Sztrajman, Alejandro; Rainer, Gilles; Ritschel, Tobias; Weyrich, Tim; Benes, Bedrich and Hauser, Helwig
Controlled capture of real‐world material appearance yields tabulated sets of highly realistic reflectance data. In practice, however, its high memory footprint requires compression into a representation that can be used efficiently in rendering while remaining faithful to the original. Previous works in appearance encoding often prioritized one of these requirements at the expense of the other, by either applying high‐fidelity array compression strategies not suited for efficient queries during rendering, or by fitting a compact analytic model that lacks expressiveness. We present a compact neural network‐based representation of BRDF data that combines high‐accuracy reconstruction with efficient practical rendering via built‐in interpolation of reflectance. We encode BRDFs as lightweight networks, and propose a training scheme with adaptive angular sampling, critical for the accurate reconstruction of specular highlights. Additionally, we propose a novel approach to make our representation amenable to importance sampling: rather than inverting the trained networks, we learn to encode them in a more compact embedding that can be mapped to the parameters of an analytic BRDF for which importance sampling is known. We evaluate encoding results on isotropic and anisotropic BRDFs from multiple real‐world datasets, and importance sampling performance for isotropic BRDFs mapped to two different analytic models.

Item: Neural Modelling of Flower Bas‐relief from 2D Line Drawing
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Zhang, Yu‐Wei; Wang, Jinlei; Wang, Wenping; Chen, Yanzhao; Liu, Hui; Ji, Zhongping; Zhang, Caiming; Benes, Bedrich and Hauser, Helwig
Different from other types of bas‐reliefs, a flower bas‐relief contains a large number of depth‐discontinuity edges. Most existing line‐based methods reconstruct free‐form surfaces while ignoring the depth discontinuities, and are thus less effective in modelling flower bas‐reliefs. This paper presents a neural‐based solution that benefits from recent advances in CNNs. Specifically, we use line gradients to encode the depth orderings at leaf edges. Given a line drawing, a heuristic method is first proposed to compute 2D gradients at the lines. The line gradients and dense curvatures interpolated from sparse user inputs are then fed into a neural network, which outputs the depths and normals of the final bas‐relief. In addition, we introduce an object‐based method to generate flower bas‐reliefs and line drawings for network training. Extensive experiments show that our method is effective in modelling bas‐reliefs with depth‐discontinuity edges. User evaluation also shows that our method is intuitive and accessible to common users.