37-Issue 1
Browsing 37-Issue 1 by Issue Date
Now showing 1 - 20 of 34
Item Story Albums: Creating Fictional Stories From Personal Photograph Sets (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Radiano, O.; Graber, Y.; Mahler, M.; Sigal, L.; Shamir, A.; Chen, Min and Benes, Bedrich
We present a method for the automatic creation of fictional storybooks based on personal photographs. Unlike previous attempts that summarize such collections by picking salient or diverse photos, or creating personal literal narratives, we focus on the creation of fictional stories. This provides new value to users, as well as an engaging way for people (especially children) to experience their own photographs. We use a graph model to represent an artist‐generated story, where each node is a ‘frame’, akin to frames in comics or storyboards. A node is described by story elements, comprising actors, location, supporting objects and time. The edges in the graph encode connections between these elements and provide the discourse of the story. Based on this construction, we develop a constraint satisfaction algorithm for one‐to‐one assignment of nodes to photographs. Once each node is assigned to a photograph, a visual depiction of the story can be generated in different styles using various templates. We show results of several fictional visual stories created from different personal photo sets and in different styles.

Item Large‐Scale Pixel‐Precise Deferred Vector Maps (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Thöny, Matthias; Billeter, Markus; Pajarola, Renato; Chen, Min and Benes, Bedrich
Rendering vector maps is a key challenge for high‐quality geographic visualization systems. In this paper, we present a novel approach to visualize vector maps over detailed terrain models in a pixel‐precise way. Our method proposes a deferred line rendering technique to display vector maps directly in a screen‐space shading stage over the 3D terrain visualization. Due to the absence of traditional geometric polygonal rendering, our algorithm is able to outperform conventional vector map rendering algorithms for geographic information systems, and supports advanced line anti‐aliasing as well as slope distortion correction. Furthermore, our deferred line rendering enables interactively customizable advanced vector styling methods as well as a tool for interactive pixel‐based editing operations.
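The core of such a deferred approach is that each screen pixel, rather than rasterized line geometry, decides its own coverage by measuring its distance to nearby vector segments. The sketch below is only an illustrative CPU approximation of that screen-space test (function and parameter names are ours, not the paper's), using the distance from a pixel centre to a 2D segment and a one-pixel anti-aliasing ramp.

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from 2D point p to segment ab (numpy arrays of shape (2,))."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def line_coverage(pixel_center, segment, half_width=1.5, aa_radius=1.0):
    """Anti-aliased coverage of a vector-map line at one pixel (0 = empty, 1 = covered)."""
    d = point_segment_distance(pixel_center, segment[0], segment[1])
    # Linear falloff over aa_radius pixels around the line edge.
    return float(np.clip((half_width + aa_radius - d) / aa_radius, 0.0, 1.0))

# Example: a pixel 1.2 units from a horizontal segment of half-width 1.5 is fully covered.
print(line_coverage(np.array([5.0, 1.2]),
                    (np.array([0.0, 0.0]), np.array([10.0, 0.0]))))
```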
Item Enhanced Visualization of Detected 3D Geometric Differences (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Palma, Gianpaolo; Sabbadin, Manuele; Corsini, Massimiliano; Cignoni, Paolo; Chen, Min and Benes, Bedrich
The wide availability of 3D acquisition devices makes viable their use for shape monitoring. The current techniques for the analysis of time‐varying data can efficiently detect actual significant geometric changes and rule out differences due to irrelevant variations (such as sampling, lighting and coverage). On the other hand, the effective visualization of such detected changes can be challenging when we want to show at the same time the original appearance of the 3D model. In this paper, we propose a dynamic technique for the effective visualization of detected differences between two 3D scenes. The presented approach, while retaining the original appearance, allows the user to switch between the two models in a way that enhances the geometric differences that have been detected as significant. Additionally, the same technique is able to visually hide the other negligible, yet visible, variations. The main idea is to use two distinct screen space time‐based interpolation functions for the significant 3D differences and for the small variations to hide. We have validated the proposed approach in a user study on a different class of datasets, proving the objective and subjective effectiveness of the method.

Item An Efficient Hybrid Incompressible SPH Solver with Interface Handling for Boundary Conditions (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Takahashi, Tetsuya; Dobashi, Yoshinori; Nishita, Tomoyuki; Lin, Ming C.; Chen, Min and Benes, Bedrich
We propose a hybrid smoothed particle hydrodynamics solver for efficiently simulating incompressible fluids using an interface handling method for boundary conditions in the pressure Poisson equation. We blend particle density computed with one smooth and one spiky kernel to improve the robustness against both fluid–fluid and fluid–solid collisions. To further improve the robustness and efficiency, we present a new interface handling method consisting of two components: free surface handling for Dirichlet boundary conditions and solid boundary handling for Neumann boundary conditions. Our free surface handling appropriately determines particles for Dirichlet boundary conditions using Jacobi‐based pressure prediction, while our solid boundary handling introduces a new term to ensure the solvability of the linear system. We demonstrate that our method outperforms the state‐of‐the‐art particle‐based fluid solvers.
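For readers unfamiliar with the kernel terminology, the sketch below evaluates a particle density as a blend of the standard poly6 ("smooth") and spiky kernels from the SPH literature. The blend weight and all names are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def w_poly6(r, h):
    """Standard smooth (poly6) SPH kernel, 3D normalization (Mueller et al. 2003)."""
    if r > h:
        return 0.0
    return 315.0 / (64.0 * np.pi * h**9) * (h**2 - r**2) ** 3

def w_spiky(r, h):
    """Standard spiky SPH kernel, 3D normalization."""
    if r > h:
        return 0.0
    return 15.0 / (np.pi * h**6) * (h - r) ** 3

def blended_density(p, neighbors, masses, h, alpha=0.5):
    """Density at position p from neighbor positions, blending the two kernels.
    alpha is a hypothetical blend weight; the paper's choice may differ."""
    rho = 0.0
    for q, m in zip(neighbors, masses):
        r = np.linalg.norm(p - q)
        rho += m * (alpha * w_poly6(r, h) + (1.0 - alpha) * w_spiky(r, h))
    return rho

# Tiny usage example with two neighbors inside the support radius h = 0.1.
p = np.zeros(3)
neighbors = [np.array([0.05, 0.0, 0.0]), np.array([0.0, 0.08, 0.0])]
print(blended_density(p, neighbors, masses=[0.02, 0.02], h=0.1))
```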
Item Olfaction and Selective Rendering (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Harvey, Carlo; Bashford‐Rogers, Thomas; Debattista, Kurt; Doukakis, Efstratios; Chalmers, Alan; Chen, Min and Benes, Bedrich
Accurate simulation of all the senses in virtual environments is a computationally expensive task. Visual saliency models have been used to improve computational performance for rendered content, but this is insufficient for multi‐modal environments. This paper considers cross‐modal perception and, in particular, if and how olfaction affects visual attention. Two experiments are presented in this paper. Firstly, eye tracking is gathered from a number of participants to gain an impression about where and how they view virtual objects when smell is introduced compared to an odourless condition. Based on the results of this experiment, a new type of saliency map in a selective‐rendering pipeline is presented. A second experiment validates this approach, and demonstrates that participants rank images as better quality, when compared to a reference, for the same rendering budget.

Item Enhancing the Realism of Sketch and Painted Portraits With Adaptable Patches (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Lee, Yin‐Hsuan; Chang, Yu‐Kai; Chang, Yu‐Lun; Lin, I‐Chen; Wang, Yu‐Shuen; Lin, Wen‐Chieh; Chen, Min and Benes, Bedrich
Realizing unrealistic faces is a complicated task that requires a rich imagination and comprehension of facial structures. When face matching, warping or stitching techniques are applied, existing methods are generally incapable of capturing detailed personal characteristics, are disturbed by block boundary artefacts, or require painting‐photo pairs for training. This paper presents a data‐driven framework to enhance the realism of sketch and portrait paintings based only on photo samples. It retrieves the optimal patches of adaptable shapes and numbers according to the content of the input portrait and collected photos. These patches are then seamlessly stitched by chromatic gain and offset compensation and multi‐level blending. Experiments and user evaluations show that the proposed method is able to generate realistic and novel results for a moderately sized photo collection.
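The "chromatic gain and offset compensation" step can be pictured as fitting, per colour channel, a linear transfer that moves a retrieved patch's statistics onto those of the region it replaces. The snippet below is a minimal sketch of that idea under our own assumptions (matching channel means and standard deviations); the authors' estimator and blending are more involved.

```python
import numpy as np

def gain_offset_compensation(patch, target, eps=1e-6):
    """Per-channel linear correction: out = gain * patch + offset,
    chosen so the corrected patch matches the target's mean and std.
    patch, target: float arrays of shape (H, W, 3) in [0, 1]."""
    out = np.empty_like(patch)
    for c in range(3):
        gain = target[..., c].std() / (patch[..., c].std() + eps)
        offset = target[..., c].mean() - gain * patch[..., c].mean()
        out[..., c] = np.clip(gain * patch[..., c] + offset, 0.0, 1.0)
    return out

# Usage: correct a random "photo patch" towards a darker "portrait region".
rng = np.random.default_rng(0)
patch = rng.uniform(0.4, 0.9, size=(32, 32, 3))
target = rng.uniform(0.1, 0.5, size=(32, 32, 3))
corrected = gain_offset_compensation(patch, target)
print(corrected.mean(axis=(0, 1)))  # close to the target's per-channel means
```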
Item Realistic Ultrasound Simulation of Complex Surface Models Using Interactive Monte‐Carlo Path Tracing (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Mattausch, Oliver; Makhinya, Maxim; Goksel, Orcun; Chen, Min and Benes, Bedrich
Ray‐based simulations have been shown to generate impressively realistic ultrasound images at interactive frame rates. Recent efforts used GPU‐based surface raytracing to simulate complex ultrasound interactions such as multiple reflections and refractions. These methods are restricted to perfectly specular reflections (i.e. following only a single reflective/refractive ray), whereas real tissue exhibits varying degrees of roughness at tissue interfaces, causing partly diffuse reflections and refractions. Such surface interactions are significantly more complex and can in general not be handled by conventional deterministic raytracing approaches. However, these can be efficiently computed by Monte‐Carlo sampling techniques, where many ray paths are generated with respect to a probability distribution. In this paper, we introduce Monte‐Carlo raytracing for ultrasound simulation. This enables the realistic simulation of ultrasound‐tissue interactions such as soft shadows and fuzzy reflections. We discuss how to properly weight the contribution of each ray path in order to simulate the behaviour of a beamformed ultrasound signal. Tracing many individual rays per transducer element is easily parallelizable on modern GPUs, as opposed to previous approaches based on recursive binary raytracing. We further propose a significant performance optimization based on adaptive sampling.
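As a rough intuition for the Monte-Carlo step (not the paper's actual sampling or weighting scheme), a rough interface can be modelled by averaging many reflection directions jittered around the ideal specular direction; smoother interfaces use less jitter and produce sharper echoes. A toy sketch with invented names:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def fuzzy_reflection_dirs(incident, normal, roughness, n_samples, rng):
    """Sample reflection directions scattered around the specular direction.
    'roughness' in [0, 1] is an illustrative spread parameter, not the paper's model."""
    spec = incident - 2.0 * np.dot(incident, normal) * normal
    dirs = []
    for _ in range(n_samples):
        jitter = roughness * rng.normal(size=3)
        d = normalize(spec + jitter)
        if np.dot(d, normal) > 0.0:   # keep only directions leaving the surface
            dirs.append(d)
    return np.array(dirs)

rng = np.random.default_rng(1)
incident = normalize(np.array([1.0, -1.0, 0.0]))
normal = np.array([0.0, 1.0, 0.0])
samples = fuzzy_reflection_dirs(incident, normal, roughness=0.2, n_samples=64, rng=rng)
print(samples.mean(axis=0))  # scattered around the specular direction, roughly [0.71, 0.71, 0]
```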
Item Peridynamics‐Based Fracture Animation for Elastoplastic Solids (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Chen, Wei; Zhu, Fei; Zhao, Jing; Li, Sheng; Wang, Guoping; Chen, Min and Benes, Bedrich
In this paper, we exploit the use of peridynamics theory for graphical animation of material deformation and fracture. We present a new meshless framework for elastoplastic constitutive modelling that contrasts with previous approaches in graphics. Our peridynamics‐based elastoplasticity model represents deformation behaviours of materials with high realism. We validate the model by varying the material properties and performing comparisons with finite element method (FEM) simulations. The integral‐based nature of peridynamics makes it trivial to model material discontinuities, which outweighs differential‐based methods in both accuracy and ease of implementation. We propose a simple strategy to model fracture in the setting of peridynamics discretization. We demonstrate that the fracture criterion combined with our elastoplasticity model could realistically produce ductile fracture as well as brittle fracture. Our work is the first application of peridynamics in graphics that could create a wide range of material phenomena including elasticity, plasticity, and fracture. The complete framework provides an attractive alternative to existing methods for producing modern visual effects.

Item Frame Rate vs Resolution: A Subjective Evaluation of Spatiotemporal Perceived Quality Under Varying Computational Budgets (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Debattista, K.; Bugeja, K.; Spina, S.; Bashford‐Rogers, T.; Hulusic, V.; Chen, Min and Benes, Bedrich
Maximizing performance for rendered content requires making compromises on quality parameters depending on the computational resources available. Yet, it is currently unclear which parameters best maximize perceived quality. This work investigates perceived quality across computational budgets for the primary spatiotemporal parameters of resolution and frame rate. Three experiments are conducted. Experiment 1 (n = 26) shows that participants prefer fixed frame rates of 60 frames per second (fps) at lower resolutions over 30 fps at higher resolutions. Experiment 2 (n = 24) explores the relationship further with more budgets and quality settings and again finds 60 fps is generally preferred even when more resources are available. Experiment 3 (n = 25) permits the use of adaptive frame rates, and analyses the resource allocation across seven budgets. Results show that while participants allocate more resources to frame rate at lower budgets, the situation reverses once higher budgets are available and a frame rate of around 40 fps is achieved. Overall, the results demonstrate a complex relationship between the effects of frame rate and resolution on perceived quality. This relationship can be harnessed, via the results and models presented, to obtain more cost‐effective virtual experiences.
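To make the notion of a spatiotemporal budget concrete: if the budget is expressed as shaded pixels per second (one simple way to define it; the paper's budget definition may differ), then doubling the frame rate under a fixed budget halves the pixel count available per frame.

```python
# Illustrative pixel-throughput budget: resolution and frame rate trade off directly.
budget = 1920 * 1080 * 30           # pixels per second available at 1080p / 30 fps
pixels_per_frame_60 = budget // 60  # the same budget spent at 60 fps
print(pixels_per_frame_60)          # 1,036,800 pixels, i.e. roughly a 1360x760 frame
```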
Item Easy Generation of Facial Animation Using Motion Graphs (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Serra, J.; Cetinaslan, O.; Ravikumar, S.; Orvalho, V.; Cosker, D.; Chen, Min and Benes, Bedrich
Facial animation is a time‐consuming and cumbersome task that requires years of experience and/or a complex and expensive set‐up. This becomes an issue, especially when animating the multitude of secondary characters required, e.g. in films or video‐games. We address this problem with a novel technique that relies on motion graphs to represent a landmarked database. Separate graphs are created for different facial regions, allowing a reduced memory footprint compared to the original data. The common poses are identified using a Euclidean‐based similarity metric and merged into the same node. This process traditionally requires a manually chosen threshold; however, we simplify it by optimizing for the desired graph compression. Motion synthesis occurs by traversing the graph using Dijkstra's algorithm, and coherent noise is introduced by swapping some path nodes with their neighbours. Expression labels, extracted from the database, provide the control mechanism for animation. We present a way of creating facial animation with reduced input that automatically controls timing and pose detail. Our technique easily fits within video‐game and crowd animation contexts, allowing the characters to be more expressive with less effort. Furthermore, it provides a starting point for content creators aiming to bring more life into their characters.
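The two graph operations named in the abstract, merging near-identical poses and finding a path between expressions, map onto very standard building blocks. The sketch below shows them in miniature with invented data and names (it is not the authors' pipeline): poses within a Euclidean threshold collapse into one node, and Dijkstra's algorithm then returns a traversal between two expression labels.

```python
import heapq
import numpy as np

def merge_poses(poses, threshold):
    """Greedily merge landmark poses whose Euclidean distance is below threshold.
    Returns representative poses and a map from original index to merged node id."""
    reps, node_of = [], {}
    for i, p in enumerate(poses):
        for j, r in enumerate(reps):
            if np.linalg.norm(p - r) < threshold:
                node_of[i] = j
                break
        else:
            node_of[i] = len(reps)
            reps.append(p)
    return reps, node_of

def dijkstra(graph, start, goal):
    """graph: {node: [(neighbor, cost), ...]}. Returns the cheapest node sequence."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None

# Toy usage: 1D "landmarks", then a 3-node expression graph neutral -> smile.
poses = [np.array([0.0]), np.array([0.01]), np.array([1.0])]
print(merge_poses(poses, threshold=0.1)[1])        # {0: 0, 1: 0, 2: 1}
graph = {"neutral": [("half_smile", 1.0)], "half_smile": [("smile", 1.0)], "smile": []}
print(dijkstra(graph, "neutral", "smile"))         # ['neutral', 'half_smile', 'smile']
```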
Item 2018 Cover Image: Thingi10K (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Zhou, Qingnan; Jacobson, Alec; Chen, Min and Benes, Bedrich

Item Human Factors in Streaming Data Analysis: Challenges and Opportunities for Information Visualization (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Dasgupta, Aritra; Arendt, Dustin L.; Franklin, Lyndsey R.; Wong, Pak Chung; Cook, Kristin A.; Chen, Min and Benes, Bedrich
Real‐world systems change continuously. In domains such as traffic monitoring or cyber security, such changes occur within short time scales. This results in a streaming data problem and leads to unique challenges for the human in the loop, as analysts have to ingest and make sense of dynamic patterns in real time. While visualizations are being increasingly used by analysts to derive insights from streaming data, we lack a thorough characterization of the human‐centred design problems and a critical analysis of the state‐of‐the‐art solutions that exist for addressing these problems. In this paper, our goal is to fill this gap by studying how the state of the art in streaming data visualization handles the challenges and reflect on the gaps and opportunities. To this end, we have three contributions in this paper: (i) problem characterization for identifying domain‐specific goals and challenges for handling streaming data, (ii) a survey and analysis of the state of the art in streaming data visualization research with a focus on how visualization design meets challenges specific to change perception and (iii) reflections on the design trade‐offs, and an outline of potential research directions for addressing the gaps in the state of the art.

Item A Visualization Framework and User Studies for Overloaded Orthogonal Drawings (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Didimo, Walter; Kornaropoulos, Evgenios M.; Montecchiani, Fabrizio; Tollis, Ioannis G.; Chen, Min and Benes, Bedrich
Overloaded orthogonal drawing (OOD) is a recent graph visualization style specifically conceived for directed graphs. It merges the advantages of some popular drawing conventions like layered drawings and orthogonal drawings, and provides additional support for some common analysis tasks. We present a visualization framework called DAGView, which implements algorithms and graphical features for the OOD style. Besides the algorithm for acyclic digraphs, the DAGView framework implements extensions to visualize both digraphs with cycles and undirected graphs, with the additional possibility of taking into account user preferences and constraints. It also supports an interactive visualization of clustered digraphs, based on the use of strongly connected components. Moreover, we describe an experimental user study, aimed at investigating the usability of OOD within the DAGView framework. The results of our study suggest that OOD can be effectively exploited to perform some basic tasks of analysis in a faster and more accurate way when compared to other drawing styles for directed graphs.
Item CLUST: Simulating Realistic Crowd Behaviour by Mining Pattern from Crowd Videos (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Zhao, M.; Cai, W.; Turner, S. J.; Chen, Min and Benes, Bedrich
In this paper, we present a data‐driven approach to simulate realistic locomotion of virtual pedestrians. We focus on simulating low‐level pedestrians' motion, where a pedestrian's motion is mainly affected by other pedestrians and static obstacles nearby, and the preferred velocities of agents (direction and speed) are obtained from higher level path planning models. Before the simulation, collision avoidance processes (i.e. examples) are extracted from videos to describe how pedestrians avoid collisions, which are then clustered using a hierarchical clustering algorithm with a novel distance function to find similar patterns of pedestrians' collision avoidance behaviours. During the simulation, at each time step, the perceived state of each agent is classified into one cluster using a neural network trained before the simulation. A sequence of velocity vectors, representing the agent's future motion, is selected among the examples corresponding to the chosen cluster. The proposed CLUST model is trained and applied to different real‐world datasets to evaluate its generality and effectiveness both qualitatively and quantitatively. The simulation results demonstrate that the proposed model can generate realistic crowd behaviours with comparable computational cost.
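The offline step, grouping recorded collision-avoidance examples by similarity, can be prototyped with off-the-shelf hierarchical clustering. The snippet below is a toy sketch under our own assumptions: each example is flattened into a fixed-length velocity sequence and compared with plain Euclidean distance, whereas the paper uses its own, more discriminative distance function.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical data: 6 examples, each a sequence of 5 planar velocity vectors (flattened).
rng = np.random.default_rng(0)
group_a = rng.normal(loc=[1.0, 0.0] * 5, scale=0.05, size=(3, 10))   # "pass on the right"
group_b = rng.normal(loc=[0.0, 1.0] * 5, scale=0.05, size=(3, 10))   # "slow down and yield"
examples = np.vstack([group_a, group_b])

# Average-linkage hierarchical clustering on Euclidean distances between sequences.
Z = linkage(examples, method="average", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)   # the two behaviour groups end up in two clusters, e.g. [1 1 1 2 2 2]
```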
Item Data Abstraction for Visualizing Large Time Series (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Shurkhovetskyy, G.; Andrienko, N.; Andrienko, G.; Fuchs, G.; Chen, Min and Benes, Bedrich
Numeric time series is a class of data consisting of chronologically ordered observations represented by numeric values. Much of the data in various domains, such as financial, medical and scientific, are represented in the form of time series. To cope with the increasing sizes of datasets, numerous approaches for abstracting large temporal data are developed in the area of data mining. Many of them proved to be useful for time series visualization. However, despite the existence of numerous surveys on time series mining and visualization, there is no comprehensive classification of the existing methods based on the needs of visualization designers. We propose a classification framework that defines essential criteria for selecting an abstraction method with an eye to subsequent visualization and support of users' analysis tasks. We show that approaches developed in the data mining field are capable of creating representations that are useful for visualizing time series data. We evaluate these methods in terms of the defined criteria and provide a summary table that can be easily used for selecting suitable abstraction methods depending on data properties, desirable form of representation, behaviour features to be studied, required accuracy and level of detail, and the necessity of efficient search and querying. We also indicate directions for possible extension of the proposed classification framework.
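As one concrete instance of the kind of data-mining abstraction such a classification covers, piecewise aggregate approximation (PAA) reduces a long series to a handful of segment means. The snippet below is our own illustration of that idea, not a method proposed in the paper.

```python
import numpy as np

def paa(series, n_segments):
    """Piecewise aggregate approximation: represent a series by per-segment means."""
    series = np.asarray(series, dtype=float)
    segments = np.array_split(series, n_segments)
    return np.array([seg.mean() for seg in segments])

# 1000 noisy samples of a slow ramp reduced to 5 values for plotting at low detail.
t = np.linspace(0.0, 1.0, 1000)
series = t + 0.01 * np.random.default_rng(0).normal(size=t.size)
print(paa(series, 5))   # roughly [0.1, 0.3, 0.5, 0.7, 0.9]
```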
Item On the Stability of Functional Maps and Shape Difference Operators (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Huang, R.; Chazal, F.; Ovsjanikov, M.; Chen, Min and Benes, Bedrich
In this paper, we provide stability guarantees for two frameworks that are based on the notion of functional maps—the shape difference operators introduced in [ROA*13] and the framework of [OBCCG13] for analyzing and visualizing the deformations between shapes induced by a functional map. We consider two types of perturbations in our analysis: one is on the input shapes and the other is on the change in . In theory, we formulate and justify the robustness that has been observed in practical implementations of those frameworks. Inspired by our theoretical results, we propose a pipeline for constructing shape difference operators on point clouds and show numerically that the results are robust and informative. In particular, we show that both the shape difference operators and the derived areas of highest distortion are stable with respect to changes in shape representation and change of scale. Remarkably, this is in contrast with the well‐known instability of the eigenfunctions of the Laplace–Beltrami operator computed on point clouds compared to those obtained on triangle meshes.
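For context on the operators whose stability is being analysed, one common formulation (following Rustamov et al. [ROA*13], stated here from memory as background rather than quoted from this paper) expresses the area-based and conformal shape differences directly in terms of the functional map matrix C between the truncated Laplace-Beltrami eigenbases of two shapes M and N:

```latex
% C maps coefficients of functions on M to coefficients on N, both expressed in the
% first k Laplace--Beltrami eigenfunctions; \Lambda_M, \Lambda_N are the diagonal
% matrices of the corresponding eigenvalues.
D_{\mathrm{area}} = C^{\top} C,
\qquad
D_{\mathrm{conf}} = \Lambda_M^{+} \, C^{\top} \Lambda_N \, C .
% A deformation that preserves local area (resp. angles) leaves the corresponding
% operator close to the identity; deviations from I localize where the shapes differ.
```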
Item Tree Growth Modelling Constrained by Growth Equations (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Yi, Lei; Li, Hongjun; Guo, Jianwei; Deussen, Oliver; Zhang, Xiaopeng; Chen, Min and Benes, Bedrich
Modelling and simulation of tree growth that is faithful to the living environment and numerically consistent with botanic knowledge are important topics for realistic modelling in computer graphics. The realism factors concerned include the effects of complex environment on tree growth and the reliability of the simulation in botanical research, such as horticulture and agriculture. This paper proposes a new approach, namely integrated growth modelling, to model virtual trees and simulate their growth by enforcing constraints of environmental resources and tree morphological properties. Morphological properties are integrated into a growth equation with different parameters specified in the simulation, including its sensitivity to light, allocation and usage of received resources and effects on its environment. The growth equation guarantees that the simulation procedure numerically matches the natural growth phenomenon of trees. With this technique, the growth procedures of diverse and realistic trees can also be modelled in different environments, such as resource competition among multiple trees.

Item CorrelatedMultiples: Spatially Coherent Small Multiples With Constrained Multi‐Dimensional Scaling (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Liu, Xiaotong; Hu, Yifan; North, Stephen; Shen, Han‐Wei; Chen, Min and Benes, Bedrich
Displaying small multiples is a popular method for visually summarizing and comparing multiple facets of a complex data set. If the correlations between the data are not considered when displaying the multiples, searching and comparing specific items become more difficult, since a sequential scan of the display is often required. To address this issue, we introduce CorrelatedMultiples, a spatially coherent visualization based on small multiples, where the items are placed so that the distances reflect their dissimilarities. We propose a constrained multi‐dimensional scaling (CMDS) solver that preserves spatial proximity while forcing the items to remain within a fixed region. We evaluate the effectiveness of our approach by comparing CMDS with other competing methods through a controlled user study and a quantitative study, and demonstrate the usefulness of CorrelatedMultiples for visual search and comparison in three real‐world case studies.

Item Issue Information (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Chen, Min and Benes, Bedrich

Item Improved Corners with Multi‐Channel Signed Distance Fields (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Chlumský, V.; Sloup, J.; Šimeček, I.; Chen, Min and Benes, Bedrich
We propose an extension to the state‐of‐the‐art text rendering technique based on sampling a 2D signed distance field from a texture. This extension significantly improves the visual quality of sharp corners, which is the most problematic feature to reproduce for the original technique. We achieve this by using a combination of multiple distance fields in conjunction, which together provide a more thorough representation of the given glyph's (or any other 2D shape's) geometry. This multi‐channel distance field representation is described along with its application in shader‐based rendering. The rendering process itself remains very simple and efficient, and is fully compatible with previous monochrome distance fields. The introduced method of multi‐channel distance field construction requires a vector representation of the input shape. A comparative measurement of rendering quality shows that the error in the output image can be reduced by up to several orders of magnitude.
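The property that makes the multi-channel representation work at render time is that the per-texel median of the three channels reconstructs a signed distance whose zero level set preserves corners. The sketch below is a CPU illustration of that sampling step (based on the commonly cited formulation of this technique; shader details such as screen-space smoothing are omitted).

```python
import numpy as np

def median3(r, g, b):
    """Per-element median of the three distance-field channels."""
    return np.maximum(np.minimum(r, g), np.minimum(np.maximum(r, g), b))

def coverage(msdf_texel, threshold=0.5):
    """Decode one RGB texel of a multi-channel SDF (values in [0, 1]) into coverage.
    In a real shader the hard step would be smoothed over the screen-space derivative."""
    r, g, b = msdf_texel
    signed_dist = median3(r, g, b) - threshold   # > 0 means inside the glyph
    return 1.0 if signed_dist > 0.0 else 0.0

print(coverage(np.array([0.8, 0.3, 0.7])))   # median 0.7 -> inside  -> 1.0
print(coverage(np.array([0.2, 0.6, 0.1])))   # median 0.2 -> outside -> 0.0
```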