40-Issue 6
Browsing 40-Issue 6 by Issue Date, showing 1 - 20 of 31 items.
Item: Parametric Skeletons with Reduced Soft‐Tissue Deformations
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Tapia, Javier; Romero, Cristian; Pérez, Jesús; Otaduy, Miguel A.; Benes, Bedrich and Hauser, Helwig
We present a method to augment parametric skeletal models with subspace soft‐tissue deformations. We combine the benefits of data‐driven skeletal models, i.e. accurate replication of contact‐free static deformations, with the benefits of pure physics‐based models, i.e. skin and skeletal reaction to contact and inertial motion with two‐way coupling. We do so in a highly efficient manner, thanks to a careful choice of reduced model for the subspace deformation. With our method, it is easy to design expressive reduced models with efficient yet accurate force computations, without the need for training deformation examples. We demonstrate the application of our method to parametric models of human bodies, SMPL, and hands, MANO, with interactive simulations of contact with nonlinear soft‐tissue deformation and skeletal response.

Item: Deep Reflectance Scanning: Recovering Spatially‐varying Material Appearance from a Flash‐lit Video Sequence
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Ye, Wenjie; Dong, Yue; Peers, Pieter; Guo, Baining; Benes, Bedrich and Hauser, Helwig
In this paper we present a novel method for recovering high‐resolution spatially‐varying isotropic surface reflectance of a planar exemplar from a flash‐lit close‐up video sequence captured with a regular hand‐held mobile phone. We do not require careful calibration of the camera and lighting parameters, but instead compute a per‐pixel flow map using a deep neural network to align the input video frames. For each video frame, we also extract the reflectance parameters, warp the neural reflectance features directly using the per‐pixel flow, and subsequently pool the warped features. Our method facilitates convenient hand‐held acquisition of spatially‐varying surface reflectance with commodity hardware by non‐expert users. Furthermore, our method enables aggregation of reflectance features from surface points visible in only a subset of the captured video frames, enabling the creation of high‐resolution reflectance maps that exceed the native camera resolution. We demonstrate and validate our method on a variety of synthetic and real‐world spatially‐varying materials.

Item: Inverse Dynamics Filtering for Sampling‐based Motion Control
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Xie, Kaixiang; Kry, Paul G.; Benes, Bedrich and Hauser, Helwig
We improve the sampling‐based motion control method proposed by Liu et al. using inverse dynamics. To deal with noise in the motion capture data, we filter the motion using a Butterworth filter, where we choose the cutoff frequency such that the zero‐moment point (ZMP) falls within the support polygon for the greatest number of frames. We discuss how to detect foot contact for foot and ground optimization and inverse dynamics, and we optimize to increase the area of the support polygon. Sample simulations receive filtered inverse dynamics torques at frames where the ZMP is sufficiently close to the support polygon, which simplifies the problem of finding the PD targets that produce physically valid control matching the target motion. We test our method on different motions and demonstrate that it achieves lower error, higher success rates, and generally produces smoother results.
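The cutoff-frequency selection described in the abstract above can be made concrete with a small sketch: sweep candidate cutoffs, low-pass filter the motion with a Butterworth filter, and keep the cutoff for which the zero-moment point lies inside the support polygon for the most frames. This is an illustrative stand-in using SciPy, with hypothetical helpers `compute_zmp` and `support_polygon_contains`; it is not the authors' implementation.

```python
# Illustrative sketch: choose the Butterworth cutoff that keeps the ZMP inside the
# support polygon for the greatest number of frames. Helper functions are hypothetical.
import numpy as np
from scipy.signal import butter, filtfilt

def choose_cutoff(motion, fps, cutoffs, compute_zmp, support_polygon_contains):
    """motion: (frames, dofs) motion-capture data sampled at `fps` Hz."""
    best_cutoff, best_count = None, -1
    for fc in cutoffs:                             # candidate cutoff frequencies in Hz
        b, a = butter(4, fc / (0.5 * fps))         # 4th-order low-pass Butterworth
        filtered = filtfilt(b, a, motion, axis=0)  # zero-phase filtering along time
        zmp = compute_zmp(filtered)                # (frames, 2) zero-moment point trajectory
        count = sum(support_polygon_contains(t, zmp[t]) for t in range(len(zmp)))
        if count > best_count:
            best_cutoff, best_count = fc, count
    return best_cutoff
```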
Item: Fashion Transfer: Dressing 3D Characters from Stylized Fashion Sketches
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Fondevilla, Amelie; Rohmer, Damien; Hahmann, Stefanie; Bousseau, Adrien; Cani, Marie‐Paule; Benes, Bedrich and Hauser, Helwig
Fashion design often starts with hand‐drawn, expressive sketches that communicate the essence of a garment over idealized human bodies. We propose an approach to automatically dress virtual characters from such input, previously complemented with user annotations. In contrast to prior work requiring users to draw garments with accurate proportions over each virtual character to be dressed, our method follows a style transfer strategy: the information extracted from a single, annotated fashion sketch can be used to inform the synthesis of one or many new garments with similar style, yet different proportions. In particular, we define the style of a loose garment from its silhouette and folds, which we extract from the drawing. Key to our method is our strategy to extract both the shape and the repetitive patterns of folds from the 2D input. As our results show, each input sketch can be used to dress a variety of characters of different morphologies, from virtual humans to cartoon‐style characters.

Item: Fast Ray Tracing of Scale‐Invariant Integral Surfaces
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Aydinlilar, Melike; Zanni, Cedric; Benes, Bedrich and Hauser, Helwig
Scale‐invariant integral surfaces, which are implicit representations of surfaces, provide a way to define smooth surfaces from skeletons with prescribed radii defined at their vertices. We introduce a new rendering pipeline that allows such surfaces to be visualized in real time. We rely on the distance to the skeleton to define a sampling strategy along the camera rays, dividing each ray into sub‐intervals. The proposed strategy is chosen to capture the main field variations. The resulting intervals are processed iteratively, relying on two main ingredients: quadratic interpolation and field mapping to an approximate squared homothetic distance. The first provides efficient root finding while the second increases the precision of the interpolation, and the combination of both results in an efficient processing routine. Finally, we present a GPU implementation that relies on a dynamic data structure to efficiently generate the intervals along the ray. This data structure also serves as an acceleration structure that allows constant‐time access to the primitives of interest during the processing of a given ray.
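To illustrate the quadratic-interpolation root finding mentioned in the abstract above, here is a small, generic sketch: a ray sub-interval is sampled at three points, a quadratic is fitted to the field values, and its root serves as the estimated surface crossing. This is a textbook construction under assumed names (`field`, `iso`), not the authors' exact routine.

```python
# Generic sketch: locate an isosurface crossing f(t) = iso on a ray sub-interval
# [t0, t1] by fitting a quadratic through three samples and solving it analytically.
import numpy as np

def quadratic_root_on_interval(field, iso, t0, t1):
    """field(t): scalar field value along the ray (hypothetical callable)."""
    ts = np.array([t0, 0.5 * (t0 + t1), t1])
    fs = np.array([field(t) - iso for t in ts])    # shift so the crossing is at f = 0
    a, b, c = np.polyfit(ts, fs, deg=2)            # fit f(t) ~ a t^2 + b t + c
    if abs(a) < 1e-12:                             # nearly linear: fall back to b t + c = 0
        return -c / b if b != 0.0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                                # no crossing predicted on this interval
    roots = [(-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)]
    inside = [t for t in roots if t0 <= t <= t1]
    return min(inside) if inside else None         # nearest valid crossing, if any
```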
Item: Self‐Supervised Learning of Part Mobility from Point Cloud Sequence
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Shi, Yahao; Cao, Xinyu; Zhou, Bin; Benes, Bedrich and Hauser, Helwig
Part mobility analysis is a significant aspect required to achieve a functional understanding of 3D objects. It would be natural to obtain part mobility from the continuous part motion of 3D objects. In this study, we introduce a self‐supervised method for segmenting motion parts and predicting their motion attributes from a point cloud sequence representing a dynamic object. To sufficiently utilize spatiotemporal information from the point cloud sequence, we generate trajectories by using correlations among successive frames of the sequence instead of directly processing the point clouds. We propose a novel neural network architecture called PointRNN to learn feature representations of trajectories along with their part rigid motions. We evaluate our method on various tasks including motion part segmentation, motion axis prediction and motion range estimation. The results demonstrate that our method outperforms previous techniques on both synthetic and real datasets. Moreover, our method has the ability to generalize to new and unseen objects. It is important to emphasize that it is not required to know any prior shape structure, prior shape category information or shape orientation. To the best of our knowledge, this is the first study to use deep learning to extract part mobility from a point cloud sequence of a dynamic object.

Item: An Efficient Hybrid Optimization Strategy for Surface Reconstruction
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Bertolino, Giulia; Montemurro, Marco; Perry, Nicolas; Pourroy, Franck; Benes, Bedrich and Hauser, Helwig
An efficient surface reconstruction strategy is presented in this study, which is able to approximate non‐convex sets of target points (TPs). The approach is split into two phases: (a) the mapping phase, making use of the shape preserving method (SPM) to get a proper parametrization of each sub‐domain composing the TPs set; (b) the fitting phase, where each patch is fitted by means of a suitable non‐uniform rational basis spline (NURBS) surface by considering, as design variables, all parameters involved in its definition. To this purpose, the surface fitting problem is formulated as a constrained non‐linear programming problem (CNLPP) defined over a domain of changing dimension, wherein both the number and the value of the design variables are optimized. To deal with this CNLPP, the optimization process is split into two steps. Firstly, a special genetic algorithm (GA) optimizes both the value and the number of design variables by means of a two‐level evolution strategy (species and individuals). Secondly, the solution provided by the GA constitutes the initial guess for the deterministic optimization, which aims at improving the accuracy of the fitting surfaces. The effectiveness of the proposed methodology is proven through several meaningful benchmarks taken from the literature.

Item: From Noon to Sunset: Interactive Rendering, Relighting, and Recolouring of Landscape Photographs by Modifying Solar Position
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Türe, Murat; Çıklabakkal, Mustafa Ege; Erdem, Aykut; Erdem, Erkut; Satılmış, Pinar; Akyüz, Ahmet Oguz; Benes, Bedrich and Hauser, Helwig
Image editing is a commonly studied problem in computer graphics. Despite the presence of many advanced editing tools, there is no satisfactory solution to controllably update the position of the sun using a single image. This problem is made complicated by the presence of clouds, complex landscapes, and the atmospheric effects that must be accounted for. In this paper, we tackle this problem starting with only a single photograph. After the user clicks on the initial position of the sun, our algorithm performs several estimation and segmentation processes to find the horizon, scene depth, clouds, and the skyline. After this initial process, the user can make both fine‐ and large‐scale changes to the position of the sun: it can be set beneath the mountains or moved behind the clouds, practically turning a midday photograph into a sunset (or vice versa). We leverage a precomputed atmospheric scattering algorithm to make all of these changes not only realistic but also achievable in real time. We demonstrate our results using both clear and cloudy skies, showing how to add, remove, and relight clouds, all the while allowing for advanced effects such as scattering, shadows, light shafts, and lens flares.

Item: Example‐Based Colour Transfer for 3D Point Clouds
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Goudé, Ific; Cozot, Rémi; Le Meur, Olivier; Bouatouch, Kadi; Benes, Bedrich and Hauser, Helwig
Example‐based colour transfer between images, which has raised a lot of interest in the past decades, consists of transferring the colour style of one image to another. Many methods based on colour distributions have been proposed, and more recently, the efficiency of neural networks has again been demonstrated for colour transfer problems. In this paper, we propose a new pipeline with methods adapted from the image domain to automatically transfer the colour from a target point cloud to an input point cloud. These colour transfer methods are based on colour distributions and account for the geometry of the point clouds to produce a coherent result. The proposed methods rely on simple statistical analysis, are effective, and succeed in transferring the colour style from one point cloud to another. The qualitative results of the colour transfers are evaluated and compared with existing methods.
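As a point of reference for the distribution-based transfer described in the abstract above, here is a minimal per-channel mean/variance matching sketch (Reinhard-style) applied to point-cloud colours. It ignores the geometry-aware treatment the paper adds, and the names are assumptions.

```python
# Minimal sketch: per-channel statistical colour transfer between two point clouds.
# colours_in / colours_target: (N, 3) and (M, 3) arrays of RGB values in [0, 1].
import numpy as np

def transfer_colour_stats(colours_in, colours_target, eps=1e-8):
    mu_in, std_in = colours_in.mean(axis=0), colours_in.std(axis=0) + eps
    mu_tg, std_tg = colours_target.mean(axis=0), colours_target.std(axis=0) + eps
    # Normalize the input distribution, then rescale to the target statistics.
    out = (colours_in - mu_in) / std_in * std_tg + mu_tg
    return np.clip(out, 0.0, 1.0)
```

In practice such statistical transfers are often performed in a decorrelated colour space (e.g. lαβ) rather than raw RGB, which tends to produce more natural results.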
Item: SREC‐RT: A Structure for Ray Tracing Rounded Edges and Corners
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Courtin, Simon; Ribardière, Mickael; Horna, Sebastien; Poulin, Pierre; Meneveaux, Daniel; Benes, Bedrich and Hauser, Helwig
Man‐made objects commonly exhibit rounded edges and corners generated through their manufacturing processes. The variation of surface normals at these confined locations produces shading details that are visually essential to the realism of synthetic scenes. The more specular the surface, the finer and more prominent its highlights. However, most geometric modellers represent rounded edges and corners with dense polygonal meshes that are limited in terms of smoothness, while tremendously increasing scene complexity. This paper proposes a non‐invasive method (i.e. one that does not modify the original geometry) for the modelling and rendering of smooth edges and corners from any input polygonal geometry defined with infinitely sharp edges. At the heart of our contribution is a geometric structure that automatically and accurately defines the geometry of edge and corner rounded areas, as well as the topological relationships at edges and vertices. This structure, called SREC‐RT, is integrated into a ray‐tracing‐based acceleration structure in order to determine the region of interest of each rounded edge and corner. It allows systematic rounding of all edges and vertices without increasing the geometric complexity of the 3D scene. While the underlying rounded geometry can be of any type, we propose a practical ray‐edge and ray‐corner intersection based on parametric surfaces. We analyse comparisons with existing methods. Our results demonstrate the advantages of our method, including extreme close‐up views of surfaces at much higher quality for very little additional memory and a reasonable computation time overhead.

Item: Visualization of Tensor Fields in Mechanics
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Hergl, Chiara; Blecha, Christian; Kretzschmar, Vanessa; Raith, Felix; Günther, Fabian; Stommel, Markus; Jankowai, Jochen; Hotz, Ingrid; Nagel, Thomas; Scheuermann, Gerik; Benes, Bedrich and Hauser, Helwig
Tensors are used to describe complex physical processes in many applications. Examples include the distribution of stresses in technical materials, acting forces during seismic events, or remodeling of biological tissues. While tensors encode such complex information mathematically precisely, the semantic interpretation of a tensor is challenging. Visualization can be beneficial here and is frequently used by domain experts. Typical strategies include the use of glyphs, color plots, lines, and isosurfaces. However, data complexity is nowadays compounded by the sheer amount of data produced by large‐scale simulations, adding another level of obstruction between user and data. Given the limitations of traditional methods and the extra cognitive effort demanded by simple ones, more advanced tensor field visualization approaches are the focus of this work. This survey aims to provide an overview of recent research results with a strong application‐oriented focus, targeting applications based on continuum mechanics, namely the fields of structural, bio‐, and geomechanics. As such, the survey complements and extends previously published surveys. Its utility is twofold: (i) it serves as a basis for the visualization community to get an overview of recent visualization techniques; (ii) it emphasizes and explains the necessity for further research on visualization in this context.

Item: Action Unit Driven Facial Expression Synthesis from a Single Image with Patch Attentive GAN
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Zhao, Yong; Yang, Le; Pei, Ercheng; Oveneke, Meshia Cédric; Alioscha‐Perez, Mitchel; Li, Longfei; Jiang, Dongmei; Sahli, Hichem; Benes, Bedrich and Hauser, Helwig
Recent advances in generative adversarial networks (GANs) have shown tremendous success for facial expression generation tasks. However, generating vivid and expressive facial expressions at the Action Unit (AU) level is still challenging, because automatic facial expression analysis for AU intensity is itself a difficult, unsolved task. In this paper, we propose a novel synthesis‐by‐analysis approach that leverages the power of the GAN framework and a state‐of‐the‐art AU detection model to achieve better results for AU‐driven facial expression generation. Specifically, we design a novel discriminator architecture by modifying the patch‐attentive AU detection network for AU intensity estimation and combine it with a global image encoder for adversarial learning to force the generator to produce more expressive and realistic facial images. We also introduce a balanced sampling approach to alleviate the imbalanced learning problem for AU synthesis. Extensive experimental results on DISFA and DISFA+ show that our approach outperforms the state of the art, both quantitatively and qualitatively, in terms of photo‐realism and expressiveness of the generated facial expressions.

Item: Fluid Reconstruction and Editing from a Monocular Video based on the SPH Model with External Force Guidance
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Nie, Xiaoying; Hu, Yong; Su, Zhiyuan; Shen, Xukun; Benes, Bedrich and Hauser, Helwig
We present a general method for reconstructing and editing a 3D fluid volume from a monocular fluid video. Although researchers have developed many monocular video‐based methods, the reconstructed results are merely a single layer of geometric surface, lack accurate physical attributes of the fluid, and are challenging to edit. We obtain a high‐quality 3D fluid volume by extending the smoothed particle hydrodynamics (SPH) model with external force guidance. For reconstructing the fluid, we design target particles recovered with the shape‐from‐shading (SFS) method and initialize fluid particles that are spatially consistent with the target particles. For editing the fluid, we translate the deformation of the target particles into the 3D fluid volume by merging user‐specified features of interest. Separating the low‐ and high‐frequency height field allows us to efficiently solve the motion equations for a liquid while retaining enough detail to obtain realistic‐looking behaviours. Our experimental results compare favourably to the state of the art in terms of global fluid volume motion features and fluid surface details, and demonstrate that our model achieves desirable and visually pleasing effects.
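For readers unfamiliar with the SPH model referenced in the abstract above, a minimal density-estimation step with the standard poly6 kernel is sketched below. This is generic SPH background, not the paper's guided formulation, and all names are illustrative.

```python
# Generic SPH background sketch: estimate per-particle density with the poly6 kernel.
# positions: (N, 3) particle positions, h: smoothing radius, mass: per-particle mass.
import numpy as np

def sph_density(positions, h, mass):
    n = len(positions)
    poly6 = 315.0 / (64.0 * np.pi * h**9)          # standard poly6 normalization in 3D
    density = np.zeros(n)
    for i in range(n):
        r2 = np.sum((positions - positions[i])**2, axis=1)
        w = np.where(r2 < h * h, (h * h - r2)**3, 0.0)
        density[i] = mass * poly6 * np.sum(w)      # sum kernel contributions of neighbours
    return density
```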
Item: Deep Neural Models for Illumination Estimation and Relighting: A Survey
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Einabadi, Farshad; Guillemaut, Jean‐Yves; Hilton, Adrian; Benes, Bedrich and Hauser, Helwig
Scene relighting and estimating illumination of a real scene for insertion of virtual objects in a mixed‐reality scenario are well‐studied challenges in the computer vision and graphics fields. Classical inverse rendering approaches aim to decompose a scene into its orthogonal constituting elements, namely scene geometry, illumination and surface materials, which can later be used for augmented reality or to render new images under novel lighting or viewpoints. Recently, the application of deep neural computing to illumination estimation, relighting and inverse rendering has shown promising results. This contribution aims to bring together current advances at this intersection in a coherent manner. We examine in detail the attributes of the proposed approaches, presented in three categories: scene illumination estimation, relighting with reflectance‐aware scene‐specific representations, and relighting as image‐to‐image transformations. Each category is concluded with a discussion of the main characteristics of the current methods and possible future trends. We also provide an overview of current publicly available datasets for neural lighting applications.

Item: Customized Summarizations of Visual Data Collections
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Yuan, Mengke; Ghanem, Bernard; Yan, Dong‐Ming; Wu, Baoyuan; Zhang, Xiaopeng; Wonka, Peter; Benes, Bedrich and Hauser, Helwig
We propose a framework to generate customized summarizations of visual data collections, such as collections of images, materials, 3D shapes, and 3D scenes. We assume that the elements in the visual data collections can be mapped to a set of vectors in a feature space, in which a fitness score for each element can be defined, and we pose the problem of customized summarization as selecting a subset of these elements. We first describe the design choices a user should be able to specify for modelling customized summarizations and propose a corresponding user interface. We then formulate the problem as a constrained optimization problem with binary variables and propose a practical and fast algorithm based on the alternating direction method of multipliers (ADMM). Our results show that our problem formulation enables a wide variety of customized summarizations, and that our solver is both significantly faster than state‐of‐the‐art commercial integer programming solvers and produces better solutions than fast relaxation‐based solvers.
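To make the subset-selection formulation above concrete, here is a greedy stand-in that trades off a per-element fitness score against redundancy with already-selected elements. The paper instead solves a constrained binary program with ADMM; this sketch only illustrates the kind of selection being formulated, and `features` and `fitness` are assumed inputs.

```python
# Illustrative stand-in: greedily pick k elements maximizing fitness minus redundancy.
# Not the paper's ADMM solver; purely to make the selection problem concrete.
import numpy as np

def greedy_summary(features, fitness, k, redundancy_weight=0.5):
    """features: (N, d) element embeddings, fitness: (N,) per-element scores."""
    feats = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(feats)):
            if i in selected:
                continue
            sim = max((feats[i] @ feats[j] for j in selected), default=0.0)
            score = fitness[i] - redundancy_weight * sim   # fitness minus redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected
```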
Item: IMAT: The Iterative Medial Axis Transform
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Lee, Yonghyeon; Baek, Jonghyuk; Kim, Young Min; Park, Frank Chongwoo; Benes, Bedrich and Hauser, Helwig
We present the iterative medial axis transform (IMAT), an iterative descent method that constructs a medial axis transform (MAT) for a sparse, noisy, oriented point cloud sampled from an object's boundary. We first establish the equivalence between the traditional definition of the MAT of an object, i.e., the set of centres and corresponding radii of all balls maximally inscribed inside the object, and an alternative characterization that matches the boundary enclosing the union of the balls with the object boundary. Based on this boundary equivalence characterization, a new MAT algorithm is proposed, in which an error function that reflects the difference between the two boundaries is minimized while restricting the number of balls to within some a priori specified upper limit. An iterative descent method with guaranteed local convergence, which is also amenable to parallelization, is developed for the minimization. Both quantitative and qualitative analyses of diverse 2D and 3D objects demonstrate the noise robustness, shape fidelity, and representation efficiency of the resulting MAT.

Item: Neural BRDF Representation and Importance Sampling
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Sztrajman, Alejandro; Rainer, Gilles; Ritschel, Tobias; Weyrich, Tim; Benes, Bedrich and Hauser, Helwig
Controlled capture of real‐world material appearance yields tabulated sets of highly realistic reflectance data. In practice, however, their high memory footprint requires compressing them into a representation that can be used efficiently in rendering while remaining faithful to the original. Previous works in appearance encoding often prioritized one of these requirements at the expense of the other, by either applying high‐fidelity array compression strategies not suited for efficient queries during rendering, or by fitting a compact analytic model that lacks expressiveness. We present a compact neural network‐based representation of BRDF data that combines high‐accuracy reconstruction with efficient practical rendering via built‐in interpolation of reflectance. We encode BRDFs as lightweight networks, and propose a training scheme with adaptive angular sampling, critical for the accurate reconstruction of specular highlights. Additionally, we propose a novel approach to make our representation amenable to importance sampling: rather than inverting the trained networks, we learn to encode them in a more compact embedding that can be mapped to the parameters of an analytic BRDF for which importance sampling is known. We evaluate encoding results on isotropic and anisotropic BRDFs from multiple real‐world datasets, and importance sampling performance for isotropic BRDFs mapped to two different analytic models.
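To make the "lightweight network per BRDF" idea above concrete, here is a minimal PyTorch sketch of an MLP that maps a 4D direction parameterization to RGB reflectance. The layer sizes, input parameterization, and training note are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch: a small MLP encoding one measured BRDF, mapping a 4D direction
# encoding (e.g. Rusinkiewicz angles) to RGB reflectance. Sizes are illustrative.
import torch
import torch.nn as nn

class TinyBRDF(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),                  # RGB reflectance output
        )

    def forward(self, dirs):                       # dirs: (batch, 4) angle encoding
        return self.net(dirs)

# Training would regress the output against tabulated reflectance samples, e.g. with
# an L1 loss over (direction, reflectance) pairs; reflectance is often handled in a
# log/relative space to tame the dynamic range of specular highlights.
```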
Item: Visual Analysis of Large‐Scale Protein‐Ligand Interaction Data
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Schatz, Karsten; Franco‐Moreno, Juan José; Schäfer, Marco; Rose, Alexander S.; Ferrario, Valerio; Pleiss, Jürgen; Vázquez, Pere‐Pau; Ertl, Thomas; Krone, Michael; Benes, Bedrich and Hauser, Helwig
When studying protein‐ligand interactions, many different factors can influence the behaviour of the protein as well as the ligands. Molecular visualisation tools typically concentrate on the movement of single ligand molecules; however, viewing only one molecule can merely provide a hint of the overall behaviour of the system. To tackle this issue, we do not focus on the visualisation of the local actions of individual ligand molecules but on the influence of the protein and the ligands' overall movement. Since the simulations required to study these problems can have millions of time steps, our system decouples visualisation and data preprocessing: our preprocessing pipeline aggregates the movement of ligand molecules relative to a receptor protein. For data analysis, we present a web‐based visualisation application that combines multiple linked 2D and 3D views that display the previously calculated data. The central view, a novel enhanced sequence diagram showing the calculated values, is linked to a traditional surface visualisation of the protein. This results in an interactive visualisation that is independent of the size of the underlying data, since the memory footprint of the aggregated data for visualisation is constant and very low, even if the raw input consists of several terabytes.

Item: Visualizing and Interacting with Geospatial Networks: A Survey and Design Space
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Schöttler, Sarah; Yang, Yalong; Pfister, Hanspeter; Bach, Benjamin; Benes, Bedrich and Hauser, Helwig
This paper surveys visualization and interaction techniques for geospatial networks from a total of 95 papers. Geospatial networks are graphs whose nodes and links can be associated with geographic locations. Examples include social networks, trade and migration, as well as traffic and transport networks. Visualizing geospatial networks poses numerous challenges around the integration of both network and geographical information, as well as additional information such as node and link attributes, time and uncertainty. Our overview analyses existing techniques along four dimensions: (i) the representation of geographical information, (ii) the representation of network information, (iii) the visual integration of both, and (iv) the use of interaction. These four dimensions allow us to discuss techniques with respect to the trade‐offs they make between showing information across all these dimensions and how they solve the problem of showing as much information as necessary while maintaining readability of the visualization.

Item: Neural Modelling of Flower Bas‐relief from 2D Line Drawing
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Zhang, Yu‐Wei; Wang, Jinlei; Wang, Wenping; Chen, Yanzhao; Liu, Hui; Ji, Zhongping; Zhang, Caiming; Benes, Bedrich and Hauser, Helwig
Different from other types of bas‐reliefs, a flower bas‐relief contains a large number of depth‐discontinuity edges. Most existing line‐based methods reconstruct free‐form surfaces while ignoring depth discontinuities, and are thus less effective at modelling flower bas‐reliefs. This paper presents a neural solution that benefits from recent advances in CNNs. Specifically, we use line gradients to encode the depth orderings at leaf edges. Given a line drawing, a heuristic method is first proposed to compute 2D gradients at lines. Line gradients and dense curvatures interpolated from sparse user inputs are then fed into a neural network, which outputs the depths and normals of the final bas‐relief. In addition, we introduce an object‐based method to generate flower bas‐reliefs and line drawings for network training. Extensive experiments show that our method is effective in modelling bas‐reliefs with depth‐discontinuity edges. A user evaluation also shows that our method is intuitive and accessible to non‐expert users.