Volume 38 (2019)
Browsing Volume 38 (2019) by Issue Date
Now showing 1 - 20 of 267
Item Style Mixer: Semantic-aware Multi-Style Transfer Network
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Huang, Zixuan; Zhang, Jinghuai; Liao, Jing; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
Recent neural style transfer frameworks have achieved astonishing visual quality and flexibility in Single-style Transfer (SST), but little attention has been paid to Multi-style Transfer (MST), which refers to simultaneously transferring multiple styles to the same image. Compared to SST, MST has the potential to create more diverse and visually pleasing stylization results. In this paper, we propose the first MST framework to automatically incorporate multiple styles into one result based on regional semantics. We first improve the existing SST backbone network by introducing a novel multi-level feature fusion module and a patch attention module to achieve better semantic correspondences and preserve richer style details. For MST, we design a conceptually simple yet effective region-based style fusion module to insert into the backbone. It assigns corresponding styles to content regions based on semantic matching and then seamlessly combines multiple styles together. Comprehensive evaluations demonstrate that our framework outperforms existing SST and MST works.
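The region-based fusion step described in the abstract above — assigning each content region the style that best matches it semantically — can be illustrated with a small sketch. This is not the authors' network; it only mimics the assignment logic on precomputed region and style feature vectors, which are hypothetical inputs here.

```python
import numpy as np

def assign_styles_to_regions(region_feats, style_feats):
    """Assign each content region the style whose feature vector is most
    similar under cosine similarity (a stand-in for semantic matching).

    region_feats: (R, D) per-region semantic features (hypothetical input).
    style_feats:  (S, D) per-style features (hypothetical input).
    Returns an index array of length R: the chosen style for each region.
    """
    r = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    s = style_feats / np.linalg.norm(style_feats, axis=1, keepdims=True)
    similarity = r @ s.T               # (R, S) cosine similarities
    return similarity.argmax(axis=1)   # best-matching style per region

# Example: 4 regions, 2 styles, 8-dimensional features.
rng = np.random.default_rng(0)
print(assign_styles_to_regions(rng.normal(size=(4, 8)), rng.normal(size=(2, 8))))
```

The hard argmax assignment is the simplest possible matching rule; the actual fusion module blends the assigned styles seamlessly rather than switching between them.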
Item Superpixel Generation by Agglomerative Clustering With Quadratic Error Minimization
(© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Dong, Xiao; Chen, Zhonggui; Yao, Junfeng; Guo, Xiaohu; Chen, Min and Benes, Bedrich
Superpixel segmentation is a popular image pre-processing technique in many computer vision applications. In this paper, we present a novel superpixel generation algorithm by agglomerative clustering with quadratic error minimization. We use a quadratic error metric (QEM) to measure the difference in spatial compactness and colour homogeneity between superpixels. Based on the quadratic function, we propose a bottom-up greedy clustering algorithm to obtain higher-quality superpixel segmentation. There are two steps in our algorithm: merging and swapping. First, we calculate the merging cost of two superpixels and iteratively merge the pair with the minimum cost until the termination condition is satisfied. Then, we optimize the boundary of superpixels by swapping pixels according to their swapping cost to improve the compactness. Due to the quadratic nature of the energy function, each of these atomic operations has only O(1) time complexity. We compare the new method with other state-of-the-art superpixel generation algorithms on two datasets, and our algorithm demonstrates superior performance.
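To make the merging step of the superpixel method above concrete, here is a minimal agglomeration sketch driven by a quadratic (sum-of-squared-error) merge cost computed from per-cluster count/sum/sum-of-squares statistics. It is not the paper's algorithm — the exact QEM, the swapping phase and the O(1) bookkeeping are omitted, and the pairwise search below favours clarity over speed.

```python
import numpy as np

def sse(n, s, q):
    """Sum of squared deviations from the mean, from count n, feature sum s
    and sum of squared norms q -- an O(1) quadratic error evaluation."""
    return q - (s @ s) / n

def agglomerate(features, adjacency, target_regions):
    """Greedy merging sketch: repeatedly merge the adjacent pair whose merge
    increases the total squared error the least. features[i] is the
    colour+position vector of element i; adjacency is a set of index pairs."""
    stats = {i: (1, f.astype(float), float(f @ f)) for i, f in enumerate(features)}
    parent = {i: i for i in stats}
    adj = {tuple(sorted(p)) for p in adjacency}

    def find(i):                      # union-find representative lookup
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def merge_cost(pair):             # increase in SSE caused by merging a pair
        (na, sa, qa), (nb, sb, qb) = stats[pair[0]], stats[pair[1]]
        return sse(na + nb, sa + sb, qa + qb) - sse(na, sa, qa) - sse(nb, sb, qb)

    while len(stats) > target_regions and adj:
        a, b = min(adj, key=merge_cost)
        na, sa, qa = stats.pop(a)
        nb, sb, qb = stats.pop(b)
        stats[a] = (na + nb, sa + sb, qa + qb)
        parent[b] = a
        adj = {tuple(sorted((find(x), find(y)))) for x, y in adj}
        adj = {p for p in adj if p[0] != p[1]}
    return {i: find(i) for i in range(len(features))}  # element -> cluster root

# Four 1-D "pixels" with chain adjacency, merged down to two superpixels.
feats = np.array([[0.0], [0.1], [5.0], [5.1]])
print(agglomerate(feats, {(0, 1), (1, 2), (2, 3)}, 2))
```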
Item A Subspace Method for Fast Locally Injective Harmonic Mapping
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Hefetz, Eden Fedida; Chien, Edward; Weber, Ofir; Alliez, Pierre and Pellacini, Fabio
We present a fast algorithm for low-distortion locally injective harmonic mappings of genus-0 triangle meshes with and without cone singularities. The algorithm consists of two portions: a linear subspace analysis and construction, and a nonlinear, nonconvex optimization that determines a mapping within the reduced subspace. The subspace is the space of solutions to the Harmonic Global Parametrization (HGP) linear system [BCW17], and only vertex positions near cones are utilized, decoupling the variable count from the mesh density. A key insight shows how to construct the linear subspace at a cost comparable to that of a linear solve, extracting a very small set of elements from the inverse of the matrix without explicitly calculating it. With a variable count on the order of the number of cones, a tangential alternating projection method [HCW17] and a subsequent Newton optimization [CW17] are used to quickly find a low-distortion locally injective mapping. This mapping determination is typically much faster than the subspace construction. Experiments demonstrate its speed and efficacy, and we find it to be an order of magnitude faster than HGP and other alternatives.

Item Microfacet Model Regularization for Robust Light Transport
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Jendersie, Johannes; Grosch, Thorsten; Boubekeur, Tamy and Sen, Pradeep
Today, Monte Carlo light transport algorithms are used in many applications to render realistic images. Depending on the complexity of the methods used, some light effects can be found by the sampling process while others cannot. In particular, specular and smooth glossy surfaces often lead to high noise and missing light effects. Path space regularization provides a solution that improves any sampling algorithm by modifying the material evaluation code. Previously, Kaplanyan and Dachsbacher [KD13] introduced the concept for pure specular interactions. We extend this idea to the commonly used microfacet models by manipulating the roughness parameter prior to evaluation. We also show that this kind of regularization requires a change in the MIS weight computation and provide the solution. Finally, we propose two heuristics to adaptively reduce the introduced bias. Using our method, many complex light effects are reproduced and the fidelity of smooth objects is increased. Additionally, if a path was sampleable before, the variance is partially reduced.
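The central manipulation in the regularization paper above — widening the microfacet roughness before evaluation so that near-specular lobes become sampleable — can be sketched in a few lines. The plain max() rule and the 0.05 floor are assumptions for illustration; the paper's heuristics adapt the amount of regularization and also adjust the MIS weights accordingly.

```python
def regularized_roughness(alpha, regularization=0.05, enabled=True):
    """Path-space regularization sketch for microfacet BSDFs: widen the
    roughness so that near-specular lobes become sampleable by the current
    technique. The fixed floor and the max() rule are illustrative stand-ins
    for the paper's adaptive heuristics that limit the introduced bias."""
    return max(alpha, regularization) if enabled else alpha

# Pseudo-usage during shading: evaluate (and importance-sample) the BSDF with
# the widened roughness, and keep the MIS PDFs consistent with that same
# widened roughness -- the abstract notes that the MIS weights must change.
print(regularized_roughness(0.001))  # near-mirror lobe widened to 0.05
print(regularized_roughness(0.3))    # already-rough lobe left untouched
```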
Item Neural BTF Compression and Interpolation
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Rainer, Gilles; Jakob, Wenzel; Ghosh, Abhijeet; Weyrich, Tim; Alliez, Pierre and Pellacini, Fabio
The Bidirectional Texture Function (BTF) is a data-driven solution to render materials with complex appearance. A typical capture contains tens of thousands of images of a material sample under varying viewing and lighting conditions. While capable of faithfully recording complex light interactions in the material, the main drawback is the massive memory requirement, both for storing and rendering, making effective compression of BTF data a critical component in practical applications. Common compression schemes used in practice are based on matrix factorization techniques, which preserve the discrete format of the original dataset. While this approach generalizes well to different materials, rendering with the compressed dataset still relies on interpolating between the closest samples. Depending on the material and the angular resolution of the BTF, this can lead to blurring and ghosting artefacts. An alternative approach uses analytic model fitting to approximate the BTF data, using continuous functions that naturally interpolate well, but whose expressive range is often not wide enough to faithfully recreate materials with complex non-local lighting effects (subsurface scattering, inter-reflections, shadowing and masking...). In light of these observations, we propose a neural network-based BTF representation inspired by autoencoders: our encoder compresses each texel to a small set of latent coefficients, while our decoder additionally takes in a light and view direction and outputs a single RGB vector at a time. This allows us to continuously query reflectance values in the light and view hemispheres, eliminating the need for linear interpolation between discrete samples. We train our architecture on fabric BTFs with a challenging appearance and compare to standard PCA as a baseline. We achieve competitive compression ratios and high-quality interpolation/extrapolation without blurring or ghosting artefacts.
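A rough picture of the decoder half of such an autoencoder — a per-texel latent code plus light and view directions in, one RGB value out — is sketched below in PyTorch. The layer widths, depth and activations are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class BTFDecoder(nn.Module):
    """Sketch of a per-texel BTF decoder: a small MLP mapping a latent texel
    code plus a light and a view direction to a single RGB reflectance value.
    Layer widths, depth and activations are illustrative assumptions."""
    def __init__(self, latent_dim=8, hidden=64):
        super().__init__()
        # Input: latent code + 3-D light direction + 3-D view direction.
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB output
        )

    def forward(self, latent, wi, wo):
        return self.net(torch.cat([latent, wi, wo], dim=-1))

# Continuous query in the light/view hemispheres, no discrete interpolation:
decoder = BTFDecoder()
latent = torch.randn(1, 8)             # per-texel code from the encoder
wi = torch.tensor([[0.0, 0.0, 1.0]])   # light direction
wo = torch.tensor([[0.3, 0.0, 0.95]])  # view direction
rgb = decoder(latent, wi, wo)
print(rgb.shape)  # torch.Size([1, 3])
```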
Item EUROGRAPHICS 2019: CGF 38-2 Frontmatter
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Alliez, Pierre; Pellacini, Fabio; Alliez, Pierre and Pellacini, Fabio

Item Wide Gamut Spectral Upsampling with Fluorescence
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Jung, Alisa; Wilkie, Alexander; Hanika, Johannes; Jakob, Wenzel; Dachsbacher, Carsten; Boubekeur, Tamy and Sen, Pradeep
Physically based spectral rendering has become increasingly important in recent years. However, asset textures in such systems are usually still drawn or acquired as RGB tristimulus values. While a number of RGB-to-spectrum upsampling techniques are available, none of them support upsampling of all colours in the full spectral locus, as it is intrinsically bigger than the gamut of physically valid reflectance spectra. But with display technology moving to increasingly wider gamuts, the ability to achieve highly saturated colours becomes an increasingly important feature. Real materials usually exhibit smooth reflectance spectra, while computationally generated spectra become more blocky as they represent increasingly bright and saturated colours. In print media, plastic or textile design, fluorescent dyes are added to extend the boundaries of the gamut of reflectance spectra. We follow the same approach for rendering: we provide a method which, given an input RGB tristimulus value, automatically provides a mixture of a regular, smooth reflectance spectrum plus a fluorescent part. For highly saturated input colours, the combination yields an improved reconstruction compared to what would be possible relying on a reflectance spectrum alone. At the core of our technique is a simple parametric spectral model for reflectance, excitation, and emission that allows for compact storage and is compatible with texture mapping. The model can then be used as a fluorescent diffuse component in an existing, more complex BRDF model. We also provide importance sampling routines for practical application in a path tracer.
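To give a feel for what a "smooth reflectance plus fluorescent part" parametric model can look like, here is a toy evaluation: a sigmoid-of-quadratic reflectance spectrum combined with separable Gaussian excitation and emission lobes. The functional forms, parameter values and the way the two parts are combined are all illustrative assumptions, not the paper's model.

```python
import numpy as np

def smooth_reflectance(wavelength, c0, c1, c2):
    """Smooth, bounded reflectance spectrum: a sigmoid of a quadratic in the
    wavelength -- a common low-parameter spectral form, used here only as an
    illustrative stand-in for the paper's reflectance part."""
    x = c0 * wavelength**2 + c1 * wavelength + c2
    return 0.5 + x / (2.0 * np.sqrt(1.0 + x * x))

def gaussian(wavelength, mean, sigma):
    return np.exp(-0.5 * ((wavelength - mean) / sigma) ** 2)

def fluorescent_response(lambda_in, lambda_out, refl_coeffs,
                         excitation=(370.0, 30.0), emission=(520.0, 35.0),
                         concentration=0.4):
    """Toy combination of a smooth reflectance spectrum with a fluorescent part
    modelled as separable excitation/emission Gaussians. Parameter values and
    the combination rule are assumptions for illustration only."""
    non_fluorescent = smooth_reflectance(lambda_out, *refl_coeffs) if lambda_in == lambda_out else 0.0
    reradiated = concentration * gaussian(lambda_in, *excitation) * gaussian(lambda_out, *emission)
    return non_fluorescent + reradiated

# Energy absorbed near 380 nm is partly re-emitted around 520 nm.
print(fluorescent_response(380.0, 520.0, refl_coeffs=(1e-5, -0.01, 1.0)))
```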
Item Bird's-Eye - Large-Scale Visual Analytics of City Dynamics using Social Location Data
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Krueger, Robert; Han, Qi; Ivanov, Nikolay; Mahtal, Sanae; Thom, Dennis; Pfister, Hanspeter; Ertl, Thomas; Gleicher, Michael and Viola, Ivan and Leitte, Heike
The analysis of behavioral city dynamics, such as temporal patterns of visited places and citizens' mobility routines, is an essential task for urban and transportation planning. Social media applications such as Foursquare and Twitter provide access to large-scale and up-to-date dynamic movement data that not only help to understand the social life and pulse of a city but also to maintain and improve urban infrastructure. However, the fast growth rate of this data poses challenges for conventional methods to provide up-to-date, flexible analysis, and planning authorities therefore barely consider it. We present a system and design study that leverages social media data to help urban and transportation planners achieve better monitoring and analysis of city dynamics, such as visited places and mobility patterns, in large metropolitan areas. We conducted a goal-and-task analysis with urban planning experts. To address these goals, we designed a system with a scalable data monitoring back-end and an interactive visual analytics interface. The monitoring component uses intelligent pre-aggregation to allow dynamic queries in near real-time. The visual analytics interface leverages unsupervised learning to reveal clusters, routines, and unusual behavior in massive data, allowing users to understand patterns in time and space. We evaluated our approach in a qualitative user study with urban planning experts, which demonstrates that intuitive integration of advanced analytical tools with visual interfaces is pivotal in making behavioral city dynamics accessible to practitioners. Our interviews also revealed areas for future research.

Item Image Composition of Partially Occluded Objects
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Tan, Xuehan; Xu, Panpan; Guo, Shihui; Wang, Wencheng; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
Image composition extracts the content of interest (COI) from a source image and blends it into a target image to generate a new image. In the majority of existing works, the COI is manually extracted and then overlaid on top of the target image. However, in practice, it is often necessary to deal with situations in which the COI is partially occluded by the target image content. In this regard, both tasks of extracting the COI and cropping its occluded part require intensive user interactions, which are laborious and seriously reduce the composition efficiency. This paper addresses these challenges by proposing an efficient image composition method. First, we extract the semantic contents of the images using state-of-the-art deep learning methods, so the COI can be selected with clicks only, which greatly reduces the required user interactions. Second, according to the user's operations (such as translation or scaling) on the COI, we can effectively infer the occlusion relationships between the COI and the contents of the target image. Thus, the COI can be adaptively embedded into the target image without concern about cropping its occluded part. Therefore, the procedures of content extraction and occlusion handling are significantly simplified, and work efficiency is remarkably improved. Experimental results show that, compared to existing works, our method can reduce the number of user interactions to approximately one-tenth and increase the speed of image composition by more than ten times.

Item Procedural Tectonic Planets
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Cortial, Yann; Peytavie, Adrien; Galin, Eric; Guérin, Eric; Alliez, Pierre and Pellacini, Fabio
We present a procedural method for authoring synthetic tectonic planets. Instead of relying on computationally demanding physically-based simulations, we capture the fundamental phenomena in a procedural method that faithfully reproduces large-scale planetary features generated by the movement and collision of tectonic plates. We approximate complex phenomena such as plate subduction and collisions to deform the lithosphere, including the continental and oceanic crusts. The user can control the movement of the plates, which dynamically evolve and generate a variety of landforms such as continents, oceanic ridges, large-scale mountain ranges and island arcs. Finally, we amplify the large-scale planet model with either procedurally defined or real-world elevation data to synthesize coherent detailed reliefs. Our method allows the user to control the evolution of an entire planet interactively and to trigger specific events such as catastrophic plate rifting.

Item Stylized Image Triangulation
(© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Lawonn, Kai; Günther, Tobias; Chen, Min and Benes, Bedrich
The art of representing images with triangles is known as image triangulation, which purposefully uses abstraction and simplification to guide the viewer's attention. The manual creation of image triangulations is tedious, and thus several tools have been developed in the past that assist in the placement of vertices by means of image feature detection and subsequent Delaunay triangulation. In this paper, we formulate the image triangulation process as an optimization problem. We provide an interactive system that optimizes the vertex locations of an image triangulation to reduce the root mean squared approximation error. Along the way, the triangulation is incrementally refined by splitting triangles until certain refinement criteria are met. Because the calculation of the energy gradients is expensive, we propose an efficient rasterization-based GPU implementation. To ensure that artists have control over details, the system offers a number of direct and indirect editing tools that split, collapse and re-triangulate selected parts of the image. For final display, we provide a set of rendering styles, including constant colours, linear gradients, tonal art maps and textures. Finally, we demonstrate temporal coherence for animations and compare our method with existing image triangulation tools.
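The optimization formulation in the triangulation paper above — move the vertex positions of a constant-colour triangulation to reduce the root-mean-squared approximation error — can be prototyped with a brute-force sketch. The finite-difference gradients and per-pixel barycentric tests below stand in for the paper's analytic, rasterization-based GPU gradients, and the triangle-splitting refinement is omitted.

```python
import numpy as np

def triangle_mask(h, w, tri):
    """Boolean mask of pixels whose centres fall inside triangle tri
    (3x2 array of (x, y) vertices), via barycentric coordinates -- a slow
    but simple stand-in for rasterization."""
    ys, xs = np.mgrid[0:h, 0:w]
    p = np.stack([xs + 0.5, ys + 0.5], axis=-1).reshape(-1, 2)
    a, b, c = tri
    v0, v1, v2 = b - a, c - a, p - a
    den = v0[0] * v1[1] - v1[0] * v0[1]
    if den == 0:                       # degenerate triangle covers nothing
        return np.zeros((h, w), bool)
    u = (v2[:, 0] * v1[1] - v1[0] * v2[:, 1]) / den
    v = (v0[0] * v2[:, 1] - v2[:, 0] * v0[1]) / den
    return ((u >= 0) & (v >= 0) & (u + v <= 1)).reshape(h, w)

def rms_error(image, vertices, triangles):
    """RMS error of approximating a grayscale image by constant-colour
    triangles; each triangle uses the mean of the pixels it covers."""
    errors, count = 0.0, 0
    for tri_idx in triangles:
        mask = triangle_mask(*image.shape, vertices[tri_idx])
        if mask.any():
            pix = image[mask]
            errors += np.sum((pix - pix.mean()) ** 2)
            count += pix.size
    return np.sqrt(errors / max(count, 1))

def optimize_vertices(image, vertices, triangles, steps=20, eps=0.5, lr=2.0):
    """Finite-difference gradient descent on the vertex positions -- a crude
    stand-in for the paper's analytic, GPU-rasterized gradients (boundary
    vertices are not constrained here)."""
    v = vertices.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(v)
        base = rms_error(image, v, triangles)
        for i in range(len(v)):
            for d in range(2):
                bumped = v.copy()
                bumped[i, d] += eps
                grad[i, d] = (rms_error(image, bumped, triangles) - base) / eps
        v -= lr * grad
    return v
```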
Item Learning to Predict Image-based Rendering Artifacts with Respect to a Hidden Reference Image
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Bemana, Mojtaba; Keinert, Joachim; Myszkowski, Karol; Bätz, Michel; Ziegler, Matthias; Seidel, Hans-Peter; Ritschel, Tobias; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
Image metrics predict the perceived per-pixel difference between a reference image and its degraded (e.g., re-rendered) version. In several important applications, the reference image is not available and image metrics cannot be applied. We devise a neural network architecture and training procedure that allows predicting the MSE, SSIM or VGG16 image difference from the distorted image alone, while the reference is not observed. This is enabled by two insights. The first is to inject sufficiently many undistorted natural image patches, which can be found in arbitrary amounts and are known to have no perceivable difference to themselves; this avoids false positives. The second is to balance the learning, carefully making sure that all image errors are equally likely, avoiding false negatives. Surprisingly, we observe that the resulting no-reference metric can, subjectively, even perform better than the reference-based one, as it had to become robust against misalignments. We evaluate the effectiveness of our approach in an image-based rendering context, both quantitatively and qualitatively. Finally, we demonstrate two applications which reduce light field capture time and provide guidance for interactive depth adjustment.

Item Optimizing Stepwise Animation in Dynamic Set Diagrams
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Mizuno, Kazuyo; Wu, Hsiang-Yun; Takahashi, Shigeo; Igarashi, Takeo; Gleicher, Michael and Viola, Ivan and Leitte, Heike
A set diagram represents the membership relation among data elements. It is often visualized as secondary information on top of primary information, such as the spatial positions of elements on maps and charts. Visualizing the temporal evolution of such set diagrams as well as their primary features is quite important; however, conventional approaches have only focused on the temporal behavior of the primary features and do not provide an effective means to highlight notable transitions within the set relationships. This paper presents an approach for generating a stepwise animation between set diagrams by decomposing the entire transition into atomic changes associated with individual data elements. The key idea behind our approach is to optimize the ordering of the atomic changes such that the synthesized animation minimizes unwanted set occlusions by considering their depth ordering and reduces the gaze shift between two consecutive stepwise changes. Experimental results and a user study demonstrate that the proposed approach effectively facilitates the visual identification of the detailed transitions inherent in dynamic set diagrams.
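The ordering idea in the set-diagram paper above — sequence the atomic changes so that consecutive steps stay close on screen — can be approximated by a greedy nearest-neighbour pass over the change positions. This toy heuristic only addresses the gaze-shift term; the paper's optimization also accounts for set occlusion and depth ordering.

```python
import numpy as np

def greedy_change_order(positions):
    """Order atomic changes to keep consecutive steps spatially close (small
    gaze shift): start at the first change and repeatedly pick the nearest
    remaining one. A toy stand-in for the paper's fuller objective.

    positions: (N, 2) screen-space locations of the atomic changes.
    """
    remaining = list(range(len(positions)))
    order = [remaining.pop(0)]
    while remaining:
        last = positions[order[-1]]
        nearest = min(remaining, key=lambda i: np.linalg.norm(positions[i] - last))
        remaining.remove(nearest)
        order.append(nearest)
    return order

print(greedy_change_order(np.array([[0, 0], [10, 10], [1, 1], [9, 9]])))
# -> [0, 2, 3, 1]: nearby changes are shown consecutively.
```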
Item Two-phase Hair Image Synthesis by Self-Enhancing Generative Model
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Qiu, Haonan; Wang, Chuan; Zhu, Hang; Zhu, Xiangyu; Gu, Jinjin; Han, Xiaoguang; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
Generating plausible hair images given limited guidance, such as sparse sketches or low-resolution images, has been made possible with the rise of Generative Adversarial Networks (GANs). Traditional image-to-image translation networks can generate recognizable results, but finer textures are usually lost and blur artifacts commonly exist. In this paper, we propose a two-phase generative model for high-quality hair image synthesis. The two-phase pipeline first generates a coarse image by an existing image translation model, then applies a re-generating network with self-enhancing capability to the coarse image. The self-enhancing capability is achieved by a proposed differentiable layer, which extracts the structural texture and orientation maps from a hair image. Extensive experiments on two tasks, Sketch2Hair and Hair Super-Resolution, demonstrate that our approach is able to synthesize plausible hair images with finer details and reaches the state of the art.

Item Topology Preserving Simplification of Medial Axes in 3D Models
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Chu, Yiyao; Hou, Fei; Wang, Wencheng; Li, Lei; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
We propose an efficient method for topology-preserving simplification of medial axes of 3D models. Existing methods either cannot preserve the topology during medial axis simplification or are geometrically inaccurate or computationally expensive. To tackle these issues, we restrict topology checking to the areas around topological holes, avoiding unnecessary checks elsewhere. Our algorithm keeps high precision even when the medial axis is simplified down to very few vertices. Furthermore, we parallelize the simplification procedure to significantly enhance performance. Experimental results show that our method preserves the topology with highly efficient performance, much superior to existing methods in terms of topology preservation, accuracy and performance.

Item Practical Person-Specific Eye Rigging
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Bérard, Pascal; Bradley, Derek; Gross, Markus; Beeler, Thabo; Alliez, Pierre and Pellacini, Fabio
We present a novel parametric eye rig for eye animation, including a new multi-view imaging system that can reconstruct eye poses at submillimeter accuracy, to which we fit our new rig. This allows us to accurately estimate person-specific eyeball shape, rotation center, interocular distance, visual axis, and other rig parameters, resulting in an animation-ready eye rig. We demonstrate the importance of several aspects of eye modeling that are often overlooked, for example that the visual axis is not identical to the optical axis, that it is important to model rotation about the optical axis, and that the rotation center of the eye should be measured accurately for each person. Since accurate rig fitting requires hand annotation of multi-view imagery for several eye gazes, we additionally propose a more user-friendly "lightweight" fitting approach, which leverages an average rig created from several pre-captured accurate rigs. Our lightweight rig fitting method allows for the estimation of eyeball shape and eyeball position given only a single pose with a known look-at point (e.g. looking into a camera) and a few manual annotations.

Item ManyLands: A Journey Across 4D Phase Space of Trajectories
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Amirkhanov, Aleksandr; Kosiuk, Ilona; Szmolyan, Peter; Amirkhanov, Artem; Mistelbauer, Gabriel; Gröller, Eduard; Raidou, Renata Georgia; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
Mathematical models of ordinary differential equations are used to describe and understand biological phenomena. These models are dynamical systems that often describe the time evolution of more than three variables, i.e., their dynamics take place in a multi-dimensional space, called the phase space. Currently, mathematical domain scientists use plots of typical trajectories in the phase space to analyze the qualitative behavior of dynamical systems. These plots are called phase portraits and they perform well for 2D and 3D dynamical systems. However, for 4D, the visual exploration of trajectories becomes challenging, as simple subspace juxtaposition is not sufficient. We propose ManyLands to support mathematical domain scientists in analyzing 4D models of biological systems. By describing the subspaces as Lands, we accompany domain scientists along a continuous journey through 4D HyperLand, 3D SpaceLand, and 2D FlatLand, using seamless transitions. The Lands are also linked to 1D TimeLines. We offer an additional dissected view of trajectories that relies on small-multiple compass-like pictograms for easy navigation across subspaces and trajectory segments of interest. We show three use cases of 4D dynamical systems from cell biology and biochemistry. An informal evaluation with mathematical experts confirmed that ManyLands helps them to visualize and analyze complex 4D dynamics, while facilitating mathematical experiments and simulations.

Item Combining Point and Line Samples for Direct Illumination
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Salesin, Katherine; Jarosz, Wojciech; Boubekeur, Tamy and Sen, Pradeep
We develop a unified framework for combining point and line samples in direct lighting calculations. While line samples have proven beneficial in a variety of rendering contexts, their application in direct lighting has been limited due to a lack of formulas for evaluating advanced BRDFs along a line and performance tied to the orientation of occluders in the scene. We lift these limitations by elevating line samples to a shared higher-dimensional space with point samples. Our key insight is to separate the probability distribution functions of line samples and of points that lie along a line sample. This simple conceptual change allows us to apply multiple importance sampling (MIS) between points and lines, and between lines, in order to leverage their respective strengths. We also show how to improve the convergence rate of MIS between points and lines in an unbiased way using a novel discontinuity-smoothing balance heuristic. We verify through a set of rendering experiments that our proposed MIS combination of points and lines, and of lines with each other, reduces the variance of the direct lighting estimate while supporting an increased range of BSDFs compared to analytic line integration.
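The conceptual move in the point-and-line paper above — placing point and line samples in a shared space so that MIS applies — can be written down compactly with the standard balance heuristic: a point generated by the line strategy gets an effective density that factors into the density of choosing the line times the 1-D density along it. The numbers and the factorization below are illustrative assumptions, and the paper further replaces the plain balance heuristic with a discontinuity-smoothing variant.

```python
def balance_heuristic_weight(pdf_this, pdfs_all):
    """Standard balance heuristic: MIS weight for the strategy that generated
    the sample, given the densities of every combined strategy at that point."""
    return pdf_this / sum(pdfs_all)

# Sketch: combining a point-sampling strategy with a line-sampling strategy.
# For a point produced by the line strategy, its effective density is taken as
# the pdf of choosing the line times the 1-D pdf along it (assumed
# factorization), which places point and line samples in a shared space.
pdf_point_strategy = 0.8   # density of the pure point sampler at this point
pdf_line = 0.5             # density of selecting this particular line
pdf_on_line = 2.0          # 1-D density of the point along the chosen line
pdf_line_strategy = pdf_line * pdf_on_line

w_line = balance_heuristic_weight(pdf_line_strategy, [pdf_line_strategy, pdf_point_strategy])
w_point = balance_heuristic_weight(pdf_point_strategy, [pdf_line_strategy, pdf_point_strategy])
print(w_line, w_point, w_line + w_point)  # the two weights sum to 1
```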
Item A Stable Graph Layout Algorithm for Processes
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Mennens, Robin; Scheepens, Roeland; Westenberg, Michel; Gleicher, Michael and Viola, Ivan and Leitte, Heike
Process mining enables organizations to analyze data about their (business) processes. Visualization is key to gaining insight into these processes and the associated data. Process visualization requires a high-quality graph layout that intuitively represents the semantics of the process. Process analysis additionally requires interactive filtering to explore the process data and process graph. The ideal process visualization therefore provides a high-quality, intuitive layout and preserves the mental map of the user during visual exploration. The current industry standard used for process visualization does not satisfy either of these requirements. In this paper, we propose a novel layout algorithm for processes based on the Sugiyama framework. Our approach consists of novel ranking and order constraint algorithms and a novel crossing minimization algorithm, which make use of the process data to compute stable, high-quality layouts. In addition, we use phased animation to further improve mental map preservation. Quantitative and qualitative evaluations show that our approach computes layouts of higher quality and preserves the mental map better than the industry standard. Additionally, our approach is substantially faster, especially for graphs with more than 250 edges.

Item HMLFC: Hierarchical Motion-Compensated Light Field Compression for Interactive Rendering
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Pratapa, Srihari; Manocha, Dinesh; Steinberger, Markus and Foley, Tim
We present a new motion-compensated hierarchical compression scheme (HMLFC) for encoding light field images (LFI) that is suitable for interactive rendering. Our method combines two different approaches, motion compensation schemes and hierarchical compression methods, to exploit redundancies in LFI. The motion compensation schemes capture redundancies in local regions of the LFI efficiently (local coherence), and the hierarchical schemes capture redundancies present across the entire LFI (global coherence). Our hybrid approach combines the two schemes, effectively capturing both local and global coherence to improve the overall compression rate. We compute a tree from the LFI using a hierarchical scheme and use phase-shifted motion compensation techniques at each level of the hierarchy. Our representation provides random access to the pixel values of the light field, which makes it suitable for interactive rendering applications with a small run-time memory footprint. Our approach is GPU-friendly and allows parallel decoding of LF pixel values. We highlight the performance on two-plane parameterized light fields and obtain a compression ratio of 30-800x with a PSNR of 40-45 dB. Overall, we observe a 2-5x improvement in compression rates using HMLFC over prior light field compression schemes that provide random access capability. In practice, our algorithm can render new views at a resolution of 512x512 on an NVIDIA GTX-980 at around 200 fps.