40-Issue 6
Browsing 40-Issue 6 by Issue Date
Now showing 1 - 20 of 31
Item Example‐Based Colour Transfer for 3D Point Clouds (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Goudé, Ific; Cozot, Rémi; Le Meur, Olivier; Bouatouch, Kadi; Benes, Bedrich and Hauser, Helwig
Example‐based colour transfer between images, which has raised a lot of interest in the past decades, consists of transferring the colour of one image to another. Many methods based on colour distributions have been proposed, and more recently, the efficiency of neural networks has been demonstrated for colour transfer problems. In this paper, we propose a new pipeline with methods adapted from the image domain to automatically transfer the colour from a target point cloud to an input point cloud. These colour transfer methods are based on colour distributions and account for the geometry of the point clouds to produce a coherent result. The proposed methods rely on simple statistical analysis, are effective, and succeed in transferring the colour style from one point cloud to another. The qualitative results of the colour transfers are evaluated and compared with existing methods.

Item SREC‐RT: A Structure for Ray Tracing Rounded Edges and Corners (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Courtin, Simon; Ribardière, Mickael; Horna, Sebastien; Poulin, Pierre; Meneveaux, Daniel; Benes, Bedrich and Hauser, Helwig
Man‐made objects commonly exhibit rounded edges and corners generated through their manufacturing processes. The variation of surface normals at these confined locations produces shading details that are visually essential to the realism of synthetic scenes. The more specular the surface, the finer and more prominent its highlights. However, most geometric modellers represent rounded edges and corners with dense polygonal meshes that are limited in terms of smoothness, while tremendously increasing scene complexity.
This paper proposes a non‐invasive method (i.e. one that does not modify the original geometry) for the modelling and rendering of smooth edges and corners from any input polygonal geometry defined with infinitely sharp edges. At the heart of our contribution is a geometric structure that automatically and accurately defines the geometry of edge and corner rounded areas, as well as the topological relationships at edges and vertices. This structure, called SREC‐RT, is integrated in a ray‐tracing‐based acceleration structure in order to determine the region of interest of each rounded edge and corner. It allows systematic rounding of all edges and vertices without increasing the 3D scene geometric complexity. While the underlying rounded geometry can be of any type, we propose a practical ray‐edge and ray‐corner intersection based on parametric surfaces. We analyse comparisons generated with existing methods. Our results present the advantages of our method, including extreme close‐up views of surfaces with a much higher quality, for very little additional memory and reasonable computation time overhead.

Item Fashion Transfer: Dressing 3D Characters from Stylized Fashion Sketches (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Fondevilla, Amelie; Rohmer, Damien; Hahmann, Stefanie; Bousseau, Adrien; Cani, Marie‐Paule; Benes, Bedrich and Hauser, Helwig
Fashion design often starts with hand‐drawn, expressive sketches that communicate the essence of a garment over idealized human bodies. We propose an approach to automatically dress virtual characters from such input, previously complemented with user annotations.
In contrast to prior work requiring users to draw garments with accurate proportions over each virtual character to be dressed, our method follows a style transfer strategy: the information extracted from a single, annotated fashion sketch can be used to inform the synthesis of one or many new garments with a similar style, yet different proportions. In particular, we define the style of a loose garment from its silhouette and folds, which we extract from the drawing. Key to our method is our strategy to extract both the shape and the repetitive patterns of folds from the 2D input. As our results show, each input sketch can be used to dress a variety of characters of different morphologies, from virtual humans to cartoon‐style characters.

Item IMAT: The Iterative Medial Axis Transform (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Lee, Yonghyeon; Baek, Jonghyuk; Kim, Young Min; Park, Frank Chongwoo; Benes, Bedrich and Hauser, Helwig
We present the iterative medial axis transform (IMAT), an iterative descent method that constructs a medial axis transform (MAT) for a sparse, noisy, oriented point cloud sampled from an object's boundary. We first establish the equivalence between the traditional definition of the MAT of an object, i.e., the set of centres and corresponding radii of all balls maximally inscribed inside the object, and an alternative characterization matching the boundary enclosing the union of the balls with the object boundary. Based on this boundary equivalence characterization, a new MAT algorithm is proposed, in which an error function that reflects the difference between the two boundaries is minimized while restricting the number of balls to within some a priori specified upper limit. An iterative descent method with guaranteed local convergence is developed for the minimization that is also amenable to parallelization.
Both quantitative and qualitative analyses of diverse 2D and 3D objects demonstrate the noise robustness, shape fidelity, and representation efficiency of the resulting MAT.

Item From Noon to Sunset: Interactive Rendering, Relighting, and Recolouring of Landscape Photographs by Modifying Solar Position (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Türe, Murat; Çıklabakkal, Mustafa Ege; Erdem, Aykut; Erdem, Erkut; Satılmış, Pinar; Akyüz, Ahmet Oguz; Benes, Bedrich and Hauser, Helwig
Image editing is a commonly studied problem in computer graphics. Despite the presence of many advanced editing tools, there is no satisfactory solution for controllably updating the position of the sun using a single image. This problem is complicated by the presence of clouds, complex landscapes, and the atmospheric effects that must be accounted for. In this paper, we tackle this problem starting with only a single photograph. Once the user clicks on the initial position of the sun, our algorithm performs several estimation and segmentation processes to find the horizon, scene depth, clouds, and the skyline. After this initial process, the user can make both fine‐ and large‐scale changes to the position of the sun: it can be set beneath the mountains or moved behind the clouds, practically turning a midday photograph into a sunset (or vice versa). We leverage a precomputed atmospheric scattering algorithm to make all of these changes not only realistic but also real‐time.
We demonstrate our results using both clear and cloudy skies, showing how to add, remove, and relight clouds, all the while allowing for advanced effects such as scattering, shadows, light shafts, and lens flares.

Item Action Unit Driven Facial Expression Synthesis from a Single Image with Patch Attentive GAN (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Zhao, Yong; Yang, Le; Pei, Ercheng; Oveneke, Meshia Cédric; Alioscha‐Perez, Mitchel; Li, Longfei; Jiang, Dongmei; Sahli, Hichem; Benes, Bedrich and Hauser, Helwig
Recent advances in generative adversarial networks (GANs) have shown tremendous success for facial expression generation tasks. However, generating vivid and expressive facial expressions at the Action Unit (AU) level is still challenging, because automatic facial expression analysis of AU intensity is itself a difficult, unsolved task. In this paper, we propose a novel synthesis‐by‐analysis approach that leverages the power of the GAN framework and a state‐of‐the‐art AU detection model to achieve better results for AU‐driven facial expression generation. Specifically, we design a novel discriminator architecture by modifying the patch‐attentive AU detection network for AU intensity estimation, and combine it with a global image encoder for adversarial learning to force the generator to produce more expressive and realistic facial images. We also introduce a balanced sampling approach to alleviate the imbalanced learning problem for AU synthesis.
Extensive experimental results on DISFA and DISFA+ show that our approach outperforms the state of the art in terms of photo‐realism and expressiveness of the facial expressions, both quantitatively and qualitatively.

Item Fluid Reconstruction and Editing from a Monocular Video based on the SPH Model with External Force Guidance (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Nie, Xiaoying; Hu, Yong; Su, Zhiyuan; Shen, Xukun; Benes, Bedrich and Hauser, Helwig
We present a general method for reconstructing and editing 3D fluid volumes from monocular fluid videos. Although researchers have developed many monocular video‐based methods, the reconstructed results are merely a single layer of surface geometry, lack accurate physical attributes of the fluid, and are challenging to edit. We obtain a high‐quality 3D fluid volume by extending the smoothed particle hydrodynamics (SPH) model with external force guidance. For reconstruction, we design target particles that are recovered with the shape‐from‐shading (SFS) method, and initialize fluid particles that are spatially consistent with the target particles. For editing, we translate the deformation of the target particles into the 3D fluid volume by merging user‐specified features of interest. Separating the low‐ and high‐frequency height fields allows us to efficiently solve the motion equations for a liquid while retaining enough detail to obtain realistic‐looking behaviours.
Our experimental results compare favourably to the state of the art in terms of global fluid volume motion features and fluid surface details, and demonstrate that our model can achieve desirable and pleasing effects.

Item Neural Modelling of Flower Bas‐relief from 2D Line Drawing (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Zhang, Yu‐Wei; Wang, Jinlei; Wang, Wenping; Chen, Yanzhao; Liu, Hui; Ji, Zhongping; Zhang, Caiming; Benes, Bedrich and Hauser, Helwig
Different from other types of bas‐reliefs, a flower bas‐relief contains a large number of depth‐discontinuity edges. Most existing line‐based methods reconstruct free‐form surfaces while ignoring the depth discontinuities, and are thus less effective at modelling flower bas‐reliefs. This paper presents a neural‐based solution which benefits from recent advances in CNNs. Specifically, we use line gradients to encode the depth orderings at leaf edges. Given a line drawing, a heuristic method is first proposed to compute 2D gradients at the lines. The line gradients and dense curvatures interpolated from sparse user inputs are then fed into a neural network, which outputs the depths and normals of the final bas‐relief. In addition, we introduce an object‐based method to generate flower bas‐reliefs and line drawings for network training. Extensive experiments show that our method is effective in modelling bas‐reliefs with depth‐discontinuity edges. User evaluation also shows that our method is intuitive and accessible to common users.

Item Half‐body Portrait Relighting with Overcomplete Lighting Representation (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Song, Guoxian; Cham, Tat‐Jen; Cai, Jianfei; Zheng, Jianmin; Benes, Bedrich and Hauser, Helwig
We present a neural‐based model for relighting a half‐body portrait image by simply referring to another portrait image with the desired lighting condition.
Rather than following the classical inverse rendering methodology, which involves estimating normals, albedo and environment maps, we implicitly encode the subject and lighting in a latent space, and use these latent codes to generate relighted images by neural rendering. A key technical innovation is the use of a novel overcomplete lighting representation, which facilitates lighting interpolation in the latent space, as well as helping regularize the self‐organization of the lighting latent space during training. In addition, we propose a novel multiplicative neural renderer that more effectively combines the subject and lighting latent codes for rendering. We also created a large‐scale photorealistic rendered relighting dataset for training, which allows our model to generalize well to real images. Extensive experiments demonstrate that our system not only outperforms existing methods for referral‐based portrait relighting, but can also generate sequences of relighted images via lighting rotations.

Item A Rapid, End‐to‐end, Generative Model for Gaseous Phenomena from Limited Views (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Qiu, Sheng; Li, Chen; Wang, Changbo; Qin, Hong; Benes, Bedrich and Hauser, Helwig
Despite the rapid development and proliferation of computer graphics hardware devices for scene capture in the most recent decade, the high‐resolution 3D/4D acquisition of gaseous scenes (e.g., smoke) in real time remains technically challenging in graphics research. In this paper, we explore a hybrid approach that simultaneously takes advantage of both model‐centric and data‐driven methods. Specifically, this paper develops a novel conditional generative model to rapidly reconstruct the temporal density and velocity fields of gaseous phenomena based on a sequence of two projection views.
With the data‐driven method, we can achieve strong coupling of the density update and the estimation of flow motion; as a result, we can greatly improve the reconstruction performance for smoke scenes. First, we employ a conditional generative network to generate the initial density field from the input projection views and estimate the flow motion based on adjacent frames. Second, we utilize a differentiable advection layer and design a velocity estimation network with a long‐term mechanism to help achieve end‐to‐end training and more stable graphics effects. Third, we can re‐simulate the input scene with flexible coupling effects based on the estimated velocity field, subject to artists' guidance or user interaction. Moreover, our generative model can accommodate a single projection view as input; in practice, more input projection views enable higher‐fidelity reconstruction with more realistic and finer details. We have conducted extensive experiments to confirm the effectiveness, efficiency, and robustness of our new method compared with previous state‐of‐the‐art techniques.

Item Optimized Processing of Localized Collisions in Projective Dynamics (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Wang, Qisi; Tao, Yutian; Brandt, Eric; Cutting, Court; Sifakis, Eftychios; Benes, Bedrich and Hauser, Helwig
We present a method for the efficient processing of contact and collision in volumetric elastic models simulated using the Projective Dynamics paradigm. Our approach enables interactive simulation of tetrahedral meshes with more than half a million elements, provided that the model satisfies two fundamental properties: the region of the model's surface that is susceptible to collision events needs to be known in advance, and the simulation degrees of freedom associated with that surface region should be limited to a small fraction (e.g. 5%) of the total simulation nodes.
In such scenarios, a partial Cholesky factorization can abstract away the behaviour of the collision‐safe subset of the face model into the Schur complement matrix with respect to the collision‐prone region. We demonstrate how fast and accurate updates of bilateral penalty‐based collision terms can be incorporated into this representation and solved with high efficiency on the GPU. We also demonstrate how to iterate a partial update of the element rotations, akin to a selective application of the local step, specifically on the smaller collision‐prone region, without explicitly paying the cost associated with the rest of the simulation mesh. We demonstrate efficient and robust interactive simulation on detailed models from animation and medical applications.

Item NOVA: Rendering Virtual Worlds with Humans for Computer Vision Tasks (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Kerim, Abdulrahman; Aslan, Cem; Celikcan, Ufuk; Erdem, Erkut; Erdem, Aykut; Benes, Bedrich and Hauser, Helwig
Today, the cutting edge of computer vision research depends greatly on the availability of large datasets, which are critical for effectively training and testing new methods. Manually annotating visual data, however, is not only a labor‐intensive process but also prone to errors. In this study, we present NOVA, a versatile framework to create realistic‐looking 3D rendered worlds containing procedurally generated humans with rich pixel‐level ground‐truth annotations. NOVA can simulate various environmental factors, such as weather conditions or different times of day, and bring an exceptionally diverse set of humans to life, each having a distinct body shape, gender and age. To demonstrate NOVA's capabilities, we generate two synthetic datasets for person tracking.
The first one includes 108 sequences, each with a different level of difficulty, such as tracking in crowded scenes or at nighttime, and aims to test the limits of current state‐of‐the‐art trackers. A second dataset of 97 sequences with normal weather conditions is used to show how our synthetic sequences can be utilized to train and boost the performance of deep‐learning‐based trackers. Our results indicate that the synthetic data generated by NOVA represents a good proxy of the real world and can be exploited for computer vision tasks.

Item Estimating Garment Patterns from Static Scan Data (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Bang, Seungbae; Korosteleva, Maria; Lee, Sung‐Hee; Benes, Bedrich and Hauser, Helwig
The acquisition of highly detailed static 3D scan data of people in clothing is becoming widely available. Since 3D scan data is given as a single mesh without semantic separation, in order to animate the data it is necessary to model the shape and deformation behaviour of the individual body and garment parts. This paper presents a new method for generating simulation‐ready garment models from 3D static scan data of clothed humans. A key contribution of our method is a novel approach to segmenting garments by finding optimal boundaries between the skin and the garment. Our boundary‐based garment segmentation method allows for stable and smooth separation of garments by using an implicit representation of the boundary and a corresponding optimization strategy. In addition, we present a novel framework to construct a 2D pattern from the segmented garment and place it around the body for a draping simulation.
The effectiveness of our method is validated by generating garment patterns for a number of scans.

Item Customized Summarizations of Visual Data Collections (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Yuan, Mengke; Ghanem, Bernard; Yan, Dong‐Ming; Wu, Baoyuan; Zhang, Xiaopeng; Wonka, Peter; Benes, Bedrich and Hauser, Helwig
We propose a framework to generate customized summarizations of visual data collections, such as collections of images, materials, 3D shapes, and 3D scenes. We assume that the elements in the visual data collection can be mapped to a set of vectors in a feature space, in which a fitness score for each element can be defined, and we pose the problem of customized summarization as selecting a subset of these elements. We first describe the design choices a user should be able to specify for modeling customized summarizations and propose a corresponding user interface. We then formulate the problem as a constrained optimization problem with binary variables and propose a practical and fast algorithm based on the alternating direction method of multipliers (ADMM). Our results show that our problem formulation enables a wide variety of customized summarizations, and that our solver is significantly faster than state‐of‐the‐art commercial integer programming solvers while producing better solutions than fast relaxation‐based solvers.

Item Design and Evaluation of Visualization Techniques to Facilitate Argument Exploration (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Khartabil, D.; Collins, C.; Wells, S.; Bach, B.; Kennedy, J.; Benes, Bedrich and Hauser, Helwig
This paper reports the design and comparison of three visualizations to represent the structure and content within arguments. Arguments are artifacts of reasoning widely used across domains such as education, policy making, and science.
An argument is made up of sequences of statements (premises) which can support or contradict each other, individually or in groups through Boolean operators. Understanding the resulting hierarchical structure of arguments while being able to read the arguments' text poses problems related to overview, detail, and navigation. Based on interviews with argument analysts, we iteratively designed three techniques, each using a combination of tree visualization (sunburst, icicle), content display (in‐situ, tooltip) and interactive navigation. Structured discussions with the analysts show the benefits of each of these techniques; for example, the sunburst is good at presenting an overview, and showing arguments in‐situ is better than pop‐ups. A controlled user study with 21 participants and three tasks provides complementary evidence suggesting that a sunburst with pop‐ups for the content is the best trade‐off solution. Our results can inform visualizations within existing argument visualization tools and increase the visibility of ‘novel‐and‐effective’ visualizations in the argument visualization community.

Item Deep Reflectance Scanning: Recovering Spatially‐varying Material Appearance from a Flash‐lit Video Sequence (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Ye, Wenjie; Dong, Yue; Peers, Pieter; Guo, Baining; Benes, Bedrich and Hauser, Helwig
In this paper we present a novel method for recovering high‐resolution spatially‐varying isotropic surface reflectance of a planar exemplar from a flash‐lit close‐up video sequence captured with a regular hand‐held mobile phone. We do not require careful calibration of the camera and lighting parameters, but instead compute a per‐pixel flow map using a deep neural network to align the input video frames. For each video frame, we also extract the reflectance parameters, warp the neural reflectance features directly using the per‐pixel flow, and subsequently pool the warped features.
Our method facilitates convenient hand‐held acquisition of spatially‐varying surface reflectance with commodity hardware by non‐expert users. Furthermore, our method enables aggregation of reflectance features from surface points visible in only a subset of the captured video frames, enabling the creation of high‐resolution reflectance maps that exceed the native camera resolution. We demonstrate and validate our method on a variety of synthetic and real‐world spatially‐varying materials.

Item An Efficient Hybrid Optimization Strategy for Surface Reconstruction (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Bertolino, Giulia; Montemurro, Marco; Perry, Nicolas; Pourroy, Franck; Benes, Bedrich and Hauser, Helwig
An efficient surface reconstruction strategy is presented in this study, which is able to approximate non‐convex sets of target points (TPs). The approach is split into two phases: (a) the mapping phase, making use of the shape preserving method (SPM) to get a proper parametrization of each sub‐domain composing the TPs set; and (b) the fitting phase, where each patch is fitted by means of a suitable non‐uniform rational basis spline (NURBS) surface by considering, as design variables, all parameters involved in its definition. To this end, the surface fitting problem is formulated as a constrained non‐linear programming problem (CNLPP) defined over a domain of changing dimension, wherein both the number and the values of the design variables are optimized. To deal with this CNLPP, the optimization process is split into two steps. First, a special genetic algorithm (GA) optimizes both the value and the number of design variables by means of a two‐level evolution strategy (species and individuals). Second, the solution provided by the GA constitutes the initial guess for a deterministic optimization, which aims at improving the accuracy of the fitting surfaces.
The effectiveness of the proposed methodology is proven through meaningful benchmarks taken from the literature.

Item Visual Analytics of Text Conversation Sentiment and Semantics (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Healey, Christopher G.; Dinakaran, Gowtham; Padia, Kalpesh; Nie, Shaoliang; Benson, J. Riley; Caira, Dave; Shaw, Dean; Catalfu, Gary; Devarajan, Ravi; Benes, Bedrich and Hauser, Helwig
This paper describes the design and implementation of a web‐based system to visualize large collections of text conversations, integrated into a hierarchical four‐level‐of‐detail design. Viewers can visualize conversations: (1) in a streamgraph topic overview for a user‐specified time period; (2) as emotion patterns for a topic chosen from the streamgraph; (3) as semantic sequences for a user‐selected emotion pattern; and (4) as an emotion‐driven conversation graph for a single conversation. We collaborated with the Live Chat customer service group at SAS Institute to design and evaluate our system's strengths and limitations.

Item Transfer Deep Learning for Reconfigurable Snapshot HDR Imaging Using Coded Masks (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Alghamdi, Masheal; Fu, Qiang; Thabet, Ali; Heidrich, Wolfgang; Benes, Bedrich and Hauser, Helwig
High dynamic range (HDR) image acquisition from a single image capture, also known as snapshot HDR imaging, is challenging because the bit depths of camera sensors are far from sufficient to cover the full dynamic range of the scene. Existing HDR techniques focus either on algorithmic reconstruction or on hardware modification to extend the dynamic range. In this paper we propose a joint design for snapshot HDR imaging by devising a spatially varying modulation mask in the hardware and building a deep learning algorithm to reconstruct the HDR image.
We leverage transfer learning to overcome the lack of sufficiently large HDR datasets. We show how transferring from a different large‐scale task (image classification on ImageNet) leads to considerable improvements in HDR reconstruction. We achieve a reconfigurable HDR camera design that does not require custom sensors, and can instead be reconfigured between HDR and conventional modes with very simple calibration steps. We demonstrate that the proposed hardware–software solution offers a flexible yet robust way to modulate per‐pixel exposures, and that the network requires little knowledge of the hardware to faithfully reconstruct the HDR image. Comparison results show that our method outperforms the state of the art in terms of visual perception quality.

Item Self‐Supervised Learning of Part Mobility from Point Cloud Sequence (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Shi, Yahao; Cao, Xinyu; Zhou, Bin; Benes, Bedrich and Hauser, Helwig
Part mobility analysis is a significant aspect of achieving a functional understanding of 3D objects. It is natural to obtain part mobility from the continuous part motion of 3D objects. In this study, we introduce a self‐supervised method for segmenting motion parts and predicting their motion attributes from a point cloud sequence representing a dynamic object. To sufficiently utilize spatiotemporal information from the point cloud sequence, we generate trajectories by using correlations among successive frames of the sequence instead of directly processing the point clouds. We propose a novel neural network architecture called PointRNN to learn feature representations of trajectories along with their part rigid motions. We evaluate our method on various tasks, including motion part segmentation, motion axis prediction and motion range estimation. The results demonstrate that our method outperforms previous techniques on both synthetic and real datasets.
Moreover, our method has the ability to generalize to new and unseen objects. Importantly, it does not require any prior shape structure, prior shape category information or shape orientation. To the best of our knowledge, this is the first study to use deep learning to extract part mobility from the point cloud sequence of a dynamic object.