36-Issue 1

Browsing 36-Issue 1 by Title
Now showing 1 - 20 of 24
Item: 2017 Cover Image: Mixing Bowl (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Marra, Alessia; Nitti, Maurizio; Papas, Marios; Müller, Thomas; Gross, Markus; Jarosz, Wojciech; Novák, Jan; Chen, Min and Zhang, Hao (Richard)

Item: Accurate and Efficient Computation of Laplacian Spectral Distances and Kernels (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Patané, Giuseppe; Chen, Min and Zhang, Hao (Richard)
This paper introduces the Laplacian spectral distances, as functions that resemble the usual distance map but exhibit properties (e.g. smoothness, locality, invariance to shape transformations) that make them useful for processing and analysing geometric data. Spectral distances are easily defined through a filtering of the Laplacian eigenpairs and reduce to the heat diffusion, wave, biharmonic and commute‐time distances for specific filters. In particular, the smoothness of the spectral distances and the encoding of local and global shape properties depend on the convergence of the filtered eigenvalues to zero. Instead of applying a truncated spectral approximation or prolongation operators, we propose a computation of Laplacian distances and kernels through the solution of sparse linear systems. Our approach is free of user‐defined parameters, overcomes the evaluation of the Laplacian spectrum and guarantees a higher approximation accuracy than previous work.

Item: Consistent Partial Matching of Shape Collections via Sparse Modeling (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Cosmo, L.; Rodolà, E.; Albarelli, A.; Mémoli, F.; Cremers, D.; Chen, Min and Zhang, Hao (Richard)
Recent efforts in the area of joint object matching approach the problem by taking as input a set of pairwise maps, which are then jointly optimized across the whole collection so that certain accuracy and consistency criteria are satisfied.
One natural requirement is cycle‐consistency—namely the fact that map composition should give the same result regardless of the path taken in the shape collection. In this paper, we introduce a novel approach to obtain consistent matches without requiring initial pairwise solutions to be given as input. We do so by optimizing a joint measure of metric distortion directly over the space of cycle‐consistent maps; in order to allow for partially similar and extra‐class shapes, we formulate the problem as a series of quadratic programs with sparsity‐inducing constraints, making our technique a natural candidate for analysing collections with a large presence of outliers. The particular form of the problem allows us to leverage results and tools from the field of evolutionary game theory. This enables a highly efficient optimization procedure which assures accurate and provably consistent solutions in a matter of minutes in collections with hundreds of shapes.

Item: Constrained Convex Space Partition for Ray Tracing in Architectural Environments (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Maria, M.; Horna, S.; Aveneau, L.; Chen, Min and Zhang, Hao (Richard)
This paper explores constrained convex space partition (CCSP) as a new acceleration structure for ray tracing. A CCSP is a graph, representing a space partition made up of empty convex volumes.
The scene geometry is located on the boundary of the convex volumes. Therefore, each empty volume is bounded by two kinds of faces: occlusive ones (belonging to the scene geometry) and non‐occlusive ones. Given a ray, ray casting is performed by traversing the CCSP one volume at a time, until it hits the scene geometry. In this paper, this idea is applied to architectural scenes. We show that a CCSP allows casting several hundreds of millions of rays per second, even if they are not spatially coherent. Experiments are performed for large furnished buildings made up of hundreds of millions of polygons and containing thousands of light sources.

Item: Constructive Visual Analytics for Text Similarity Detection (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Abdul-Rahman, A.; Roe, G.; Olsen, M.; Gladstone, C.; Whaling, R.; Cronk, N.; Morrissey, R.; Chen, M.; Chen, Min and Zhang, Hao (Richard)
Detecting similarity between texts is a frequently encountered text mining task.
Because the measurement of similarity is typically composed of a number of metrics, and some measures are sensitive to subjective interpretation, a generic detector obtained using machine learning often has difficulties balancing the roles of different metrics according to the semantic context exhibited in a specific collection of texts. In order to facilitate human interaction in a visual analytics process for text similarity detection, we first map the problem of pairwise sequence comparison to that of image processing, allowing patterns of similarity to be visualized as a 2D pixelmap. We then devise a visual interface to enable users to construct and experiment with different detectors using primitive metrics, in a way similar to constructing an image processing pipeline. We deployed this new approach for the identification of commonplaces in 18th‐century literary and print culture. Domain experts were then able to make use of the prototype system to derive new scholarly discoveries and generate new hypotheses.

Item: Data‐Driven Shape Analysis and Processing (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Xu, Kai; Kim, Vladimir G.; Huang, Qixing; Kalogerakis, Evangelos; Chen, Min and Zhang, Hao (Richard)
Data‐driven methods serve an increasingly important role in discovering geometric, structural and semantic relationships between shapes. In contrast to traditional approaches that process shapes in isolation of each other, data‐driven methods aggregate information from 3D model collections to improve the analysis, modelling and editing of shapes. Data‐driven methods are also able to learn computational models that reason about properties and relationships of shapes without relying on hard‐coded rules or explicitly programmed instructions. Through reviewing the literature, we provide an overview of the main concepts and components of these methods and discuss their application to classification, segmentation, matching, reconstruction, modelling and exploration, as well as scene analysis and synthesis. We conclude our report with ideas that can inspire future research in data‐driven shape analysis and processing.

Item: Digital Fabrication Techniques for Cultural Heritage: A Survey (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Scopigno, R.; Cignoni, P.; Pietroni, N.; Callieri, M.; Dellepiane, M.; Chen, Min and Zhang, Hao (Richard)
Digital fabrication devices exploit basic technologies in order to create tangible reproductions of 3D digital models. Although current 3D printing pipelines still suffer from several restrictions, accuracy in reproduction has reached an excellent level. The manufacturing industry has been the main domain of 3D printing applications over the last decade. Digital fabrication techniques have also been demonstrated to be effective in many other contexts, including the consumer domain. Cultural Heritage is one of the new application contexts and is an ideal domain to test the flexibility and quality of this new technology. This survey overviews the various fabrication technologies, discussing their strengths, limitations and costs. Various successful uses of 3D printing in Cultural Heritage are analysed, which should also be useful for other application contexts. We review works that have attempted to extend fabrication technologies in order to deal with the specific issues in the use of digital fabrication in Cultural Heritage. Finally, we also propose areas for future research.

Item: Discovering Structured Variations Via Template Matching (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Ceylan, Duygu; Dang, Minh; Mitra, Niloy J.; Neubert, Boris; Pauly, Mark; Chen, Min and Zhang, Hao (Richard)
Understanding patterns of variation from raw measurement data remains a central goal of shape analysis. Such an understanding reveals which elements are repeated, or how elements can be derived as structured variations from a common base element. We investigate this problem in the context of 3D acquisitions of buildings. Utilizing a set of template models, we discover geometric similarities across a set of building elements. Each template is equipped with a deformation model that defines variations of a base geometry. Central to our algorithm is a simultaneous template matching and deformation analysis that detects patterns across building elements by extracting similarities in the deformation modes of their matching templates. We demonstrate that such an analysis can successfully detect structured variations even for noisy and incomplete data.

Item: Editorial (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Chen, Min; Zhang, Hao (Richard); Chen, Min and Zhang, Hao (Richard)

Item: Graphs in Scientific Visualization: A Survey (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Wang, Chaoli; Tao, Jun; Chen, Min and Zhang, Hao (Richard)
Graphs represent general node‐link diagrams and have long been utilized in scientific visualization for data organization and management. However, using graphs as a visual representation and interface for navigating and exploring scientific data sets has a much shorter history, yet the amount of work along this direction is clearly on the rise in recent years. In this paper, we take a holistic perspective and survey graph‐based representations and techniques for scientific visualization. Specifically, we classify these representations and techniques into four categories, namely partition‐wise, relationship‐wise, structure‐wise and provenance‐wise. We survey related publications in each category, explaining the roles of graphs in related work and highlighting their similarities and differences. At the end, we re‐examine these related publications following the graph‐based visualization pipeline. We also point out research trends and remaining challenges in graph‐based representations and techniques for scientific visualization.

Item: Inversion Fractals and Iteration Processes in the Generation of Aesthetic Patterns (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Gdawiec, K.; Chen, Min and Zhang, Hao (Richard)
In this paper, we generalize the idea of star‐shaped set inversion fractals using iterations known from fixed point theory. We also extend the iterations from real parameters to so‐called ‐system numbers and propose the use of switching processes. All the proposed generalizations allow us to obtain new and diverse fractal patterns that can be used, e.g. as textile and ceramics patterns. Moreover, we show that in the chaos game for iterated function systems—which is similar to the inversion fractals generation algorithm—the proposed generalizations do not give interesting results.

Item: Issue Information (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Chen, Min and Zhang, Hao (Richard)

Item: Multi-Modal Perception for Selective Rendering (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Harvey, Carlo; Debattista, Kurt; Bashford-Rogers, Thomas; Chalmers, Alan; Chen, Min and Zhang, Hao (Richard)
A major challenge in generating high‐fidelity virtual environments (VEs) is to be able to provide realism at interactive rates.
The high‐fidelity simulation of light and sound is still unachievable in real time, as such physical accuracy is very computationally demanding. Only recently has visual perception been used in high‐fidelity rendering to improve performance by a series of novel exploitations: rendering parts of the scene that are not currently being attended to by the viewer at a much lower quality, without the difference being perceived. This paper investigates the effect spatialized directional sound has on the visual attention of a user towards rendered images. These perceptual artefacts are utilized in selective rendering pipelines via the use of multi‐modal maps. The multi‐modal maps are tested through psychophysical experiments to examine their applicability to selective rendering algorithms, with a series of fixed cost rendering functions, and are found to perform significantly better than only using image saliency maps that are naively applied to multi‐modal VEs.

Item: Output-Sensitive Filtering of Streaming Volume Data (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Solteszova, Veronika; Birkeland, Åsmund; Stoppel, Sergej; Viola, Ivan; Bruckner, Stefan; Chen, Min and Zhang, Hao (Richard)
Real‐time volume data acquisition poses substantial challenges for the traditional visualization pipeline, where data enhancement is typically seen as a pre‐processing step. In the case of 4D ultrasound data, for instance, costly processing operations to reduce noise and to remove artefacts need to be executed for every frame. To enable the use of high‐quality filtering operations in such scenarios, we propose an output‐sensitive approach to the visualization of streaming volume data. Our method evaluates the potential contribution of all voxels to the final image, allowing us to skip expensive processing operations that have little or no effect on the visualization. As filtering operations modify the data values, which may affect the visibility, our main contribution is a fast scheme to predict their maximum effect on the final image. Our approach prioritizes filtering of voxels with high contribution to the final visualization based on a maximal permissible error per pixel. With zero permissible error, the optimized filtering will yield a result that is identical to filtering of the entire volume. We provide a thorough technical evaluation of the approach and demonstrate it on several typical scenarios that require on‐the‐fly processing.

Item: Partial Functional Correspondence (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Rodolà, E.; Cosmo, L.; Bronstein, M. M.; Torsello, A.; Cremers, D.; Chen, Min and Zhang, Hao (Richard)
In this paper, we propose a method for computing partial functional correspondence between non‐rigid shapes. We use perturbation analysis to show how removal of shape parts changes the Laplace–Beltrami eigenfunctions, and exploit it as a prior on the spectral representation of the correspondence. Corresponding parts are optimization variables in our problem and are used to weight the functional correspondence; we are looking for the largest and most regular (in the Mumford–Shah sense) parts that minimize correspondence distortion. We show that our approach can cope with very challenging correspondence settings.

Item: Predicting Visual Perception of Material Structure in Virtual Environments (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Filip, J.; Vávra, R.; Havlíček, M.; Krupička, M.; Chen, Min and Zhang, Hao (Richard)
One of the most accurate yet still practical representations of material appearance is the Bidirectional Texture Function (BTF). The BTF can be viewed as an extension of the Bidirectional Reflectance Distribution Function (BRDF) with additional spatial information that includes local visual effects such as shadowing, interreflection, subsurface scattering, etc. However, the shift from BRDF to BTF represents not only a huge leap with respect to the realism of material reproduction, but also related high memory and computational costs stemming from the storage and processing of massive BTF data. In this work, we argue that each opaque material, regardless of its surface structure, can be safely substituted by a BRDF without the introduction of a significant perceptual error when viewed from an appropriate distance. Therefore, we ran a set of psychophysical studies over 25 materials to determine so‐called critical viewing distances, i.e. the minimal distances at which the material spatial structure (texture) cannot be visually discerned. Our analysis determined such typical distances for several material categories often used in interior design applications.
Furthermore, we propose a combination of computational features that can predict such distances without the need for a psychophysical study. We show that our work can significantly reduce rendering costs in applications that process complex virtual scenes.

Item: Sparse GPU Voxelization of Yarn‐Level Cloth (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Lopez‐Moreno, Jorge; Miraut, David; Cirio, Gabriel; Otaduy, Miguel A.; Chen, Min and Zhang, Hao (Richard)
Most popular methods in cloth rendering rely on volumetric data in order to model complex optical phenomena such as sub‐surface scattering. These approaches are able to produce very realistic illumination results, but their volumetric representations are costly to compute and render, forfeiting any interactive feedback.
In this paper, we introduce a method based on the Graphics Processing Unit (GPU) for voxelization and visualization, suitable for both interactive and offline rendering. Recent features in the OpenGL model, like the ability to dynamically address arbitrary buffers and allocate bindless textures, are combined into our pipeline to interactively voxelize millions of polygons into a set of large three‐dimensional (3D) textures (>10 elements), generating a volume with sub‐voxel accuracy, which is suitable even for high‐density woven cloth such as linen.

Item: A Survey of Surface Reconstruction from Point Clouds (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Berger, Matthew; Tagliasacchi, Andrea; Seversky, Lee M.; Alliez, Pierre; Guennebaud, Gaël; Levine, Joshua A.; Sharf, Andrei; Silva, Claudio T.; Chen, Min and Zhang, Hao (Richard)
The area of surface reconstruction has seen substantial progress in the past two decades.
The traditional problem addressed by surface reconstruction is to recover the digital representation of a physical shape that has been scanned, where the scanned data contain a wide variety of defects. While much of the earlier work has been focused on reconstructing a piece‐wise smooth representation of the original shape, recent work has taken on more specialized priors to address significantly challenging data imperfections, where the reconstruction can take on different representations—not necessarily the explicit geometry. We survey the field of surface reconstruction, and provide a categorization with respect to priors, data imperfections and reconstruction output. By considering a holistic view of surface reconstruction, we show a detailed characterization of the field, highlight similarities between diverse reconstruction techniques and provide directions for future work in surface reconstruction.

Item: A Survey of Visualization for Live Cell Imaging (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Pretorius, A. J.; Khan, I. A.; Errington, R. J.; Chen, Min and Zhang, Hao (Richard)
Live cell imaging is an important biomedical research paradigm for studying dynamic cellular behaviour.
Although phenotypic data derived from images are difficult to explore and analyse, some researchers have successfully addressed this with visualization. Nonetheless, visualization methods for live cell imaging data have been reported in an ad hoc and fragmented fashion. This leads to a knowledge gap where it is difficult for biologists and visualization developers to evaluate the advantages and disadvantages of different visualization methods, and for visualization researchers to gain an overview of existing work to identify research priorities. To address this gap, we survey existing visualization methods for live cell imaging from a visualization research perspective for the first time. Based on recent visualization theory, we perform a structured qualitative analysis of visualization methods that includes characterizing the domain and data, abstracting tasks, and describing visual encoding and interaction design. Based on our survey, we identify and discuss research gaps that future work should address: the broad analytical context of live cell imaging; the importance of behavioural comparisons; links with dynamic data visualization; the consequences of different data modalities; shortcomings in interactive support; and, in addition to analysis, the value of the presentation of phenotypic data and insights to other stakeholders.

Item: Synthesis of Human Skin Pigmentation Disorders (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Barros, R. S.; Walter, M.; Chen, Min and Zhang, Hao (Richard)
Changes in the human pigmentary system can lead to imbalances in the distribution of melanin in the skin, resulting in artefacts known as pigmented lesions. Our work takes as its departing point biological data regarding human skin, the pigmentary system and the melanocyte life cycle, and presents a reaction–diffusion model for the simulation of the shape features of human‐pigmented lesions. The simulation of such disorders has many applications in dermatology, for instance, to assist dermatologists in diagnosis and training related to pigmentation disorders. Our study focuses, however, on applications related to computer graphics. Thus, we also present a method to seamlessly blend the results of our simulation model into images of healthy human skin. In this context, our model contributes to the generation of more realistic skin textures and therefore more realistic human models. In order to assess the quality of our results, we measured and compared the characteristics of the shape of real and synthesized pigmented lesions. We show that synthesized and real lesions have no statistically significant differences in their shape features.
Visually, our results also compare favourably with images of real lesions, being virtually indistinguishable from real images.