Volume 30 (2011)
Browsing Volume 30 (2011) by Issue Date
Showing items 1-20 of 236
Item: Computational Plenoptic Imaging (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Wetzstein, Gordon; Ihrke, Ivo; Lanman, Douglas; Heidrich, Wolfgang
Editors: Eduard Groeller and Holly Rushmeier
The plenoptic function is a ray-based model for light that includes the colour spectrum as well as spatial, temporal and directional variation. Although digital light sensors have evolved greatly in recent years, one fundamental limitation remains: all standard CCD and CMOS sensors integrate over the dimensions of the plenoptic function as they convert photons into electrons. In the process, all visual information is irreversibly lost, except for a two-dimensional, spatially varying subset: the common photograph. In this state-of-the-art report, we review approaches that optically encode dimensions of the plenoptic function beyond those captured by traditional photography and reconstruct the recorded information computationally.

Item: Skeleton Computation of Orthogonal Polyhedra (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Martinez, Jonas; Vigo, Marc; Pla-Garcia, Nuria
Editors: Mario Botsch and Scott Schaefer
Skeletons are powerful geometric abstractions that provide useful representations for a number of geometric operations. The straight skeleton has a lower combinatorial complexity than the medial axis. Moreover, while the medial axis of a polyhedron is composed of quadric surfaces, the straight skeleton consists only of planar faces. Although several methods exist to compute the straight skeleton of a polygon, the straight skeleton of polyhedra has received much less attention. We need to compute the skeleton of very large datasets storing orthogonal polyhedra, and we must also handle the geometric degeneracies that commonly arise when dealing with orthogonal polyhedra. We present a new approach for robustly computing the straight skeleton of orthogonal polyhedra. We follow a geometric technique that works directly with the boundary of an orthogonal polyhedron. Our approach is output-sensitive with respect to the number of vertices of the skeleton and resolves geometric degeneracies. Unlike existing straight skeleton algorithms, which shrink the object boundary to obtain the skeleton, our algorithm relies on the plane-sweep paradigm. The resulting skeleton is composed only of axis-aligned and 45°-rotated planar faces and edges.

Item: Symmetry Hierarchy of Man-Made Objects (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Wang, Yanzhen; Xu, Kai; Li, Jun; Zhang, Hao; Shamir, Ariel; Liu, Ligang; Cheng, Zhi-Quan; Xiong, Y.
Editors: M. Chen and O. Deussen
We introduce the symmetry hierarchy of man-made objects, a high-level structural representation of a 3D model providing a symmetry-induced, hierarchical organization of the model's constituent parts. Given an input mesh, we segment it into primitive parts and build an initial graph which encodes inter-part symmetries and connectivity relations, as well as self-symmetries of individual parts. The symmetry hierarchy is constructed from the initial graph via recursive graph contraction, which either groups parts by symmetry or assembles connected sets of parts. The order of graph contraction is dictated by a set of precedence rules designed primarily to respect the law of symmetry in perceptual grouping and the principle of compactness of representation. We show that the symmetry hierarchy naturally implies a hierarchical segmentation that is more meaningful than those produced by local geometric considerations.
We also develop an application of symmetry hierarchies to structural shape editing.

Item: VASE: Volume-Aware Surface Evolution for Surface Reconstruction from Incomplete Point Clouds (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Tagliasacchi, Andrea; Olson, Matt; Zhang, Hao; Hamarneh, Ghassan; Cohen-Or, Daniel
Editors: Mario Botsch and Scott Schaefer
Objects with many concavities are difficult to acquire using laser scanners. The highly concave areas are hard for a scanner to access due to occlusions by other components of the object, so the resulting point scan typically suffers from large amounts of missing data. Methods that use surface-based priors rely on local surface estimates and perform well only when filling small holes. When the holes become large, the reconstruction problem becomes severely under-constrained, which necessitates additional reconstruction priors. In this paper, we introduce weak volumetric priors which assume that the volume of a shape varies smoothly and that each point cloud sample is visible from outside the shape. Specifically, the union of view rays given by the scanner implicitly carves the exterior volume, while volumetric smoothness regularizes the internal volume. We incorporate these priors into a surface evolution framework in which a new energy term defined by volumetric smoothness is introduced to handle large amounts of missing data. We demonstrate the effectiveness of our method on objects exhibiting deep concavities, and show its general applicability over a broader spectrum of geometric scenarios.

Item: Improved Model- and View-Dependent Pruning of Large Botanical Scenes (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Neubert, B.; Pirk, S.; Deussen, O.; Dachsbacher, C.
Editors: Eduard Groeller and Holly Rushmeier
We present an optimized pruning algorithm that allows for considerable geometry reduction in large botanical scenes while maintaining high and coherent rendering quality. We improve upon previous techniques by applying model-specific geometry reduction functions and optimized scaling functions. For this we introduce Precision and Recall (PR) as a quality measure for rendering and show how PR scores can be used to predict better scaling values. We conducted a user study in which subjects adjusted the scaling value; it shows that the predicted scaling values match the preferred ones. Finally, we extend the originally purely stochastic geometry prioritization for pruning to account for view-optimized geometry selection, which makes it possible to take global scene information, such as occlusion, into consideration. We demonstrate our method on the rendering of scenes with thousands of complex tree models in real time.

Item: Prior Knowledge for Part Correspondence (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Kaick, Oliver van; Tagliasacchi, Andrea; Sidi, Oana; Zhang, Hao; Cohen-Or, Daniel; Wolf, Lior; Hamarneh, Ghassan
Editors: M. Chen and O. Deussen
Classical approaches to shape correspondence base their computation purely on the properties, in particular the geometric similarity, of the shapes in question. Their performance still falls far short of that of humans in challenging cases where corresponding shape parts may differ significantly in geometry or even topology. We stipulate that in these cases, shape correspondence by humans involves recognition of the shape parts, where prior knowledge of the parts plays a more dominant role than geometric similarity. We introduce an approach to part correspondence which incorporates prior knowledge imparted by a training set of pre-segmented, labeled models and combines this knowledge with content-driven analysis based on geometric similarity between the matched shapes. First, the prior knowledge is learned from the training set in the form of per-label classifiers. Next, given two query shapes to be matched, we apply the classifiers to assign a probabilistic label to each shape face.
Finally, by means of a joint labeling scheme, the probabilistic labels are used synergistically with pairwise assignments derived from geometric similarity to produce the resulting part correspondence. We show that the incorporation of knowledge is especially effective in dealing with shapes exhibiting large intra-class variations. We also show that combining knowledge and content analyses outperforms approaches guided by either attribute alone.

Item: CheckViz: Sanity Check and Topological Clues for Linear and Non-Linear Mappings (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Lespinats, Sylvain; Aupetit, Michaël
Editors: Eduard Groeller and Holly Rushmeier
Multidimensional scaling is a must-have tool for visual data miners, projecting multidimensional data onto a two-dimensional plane. However, what we see is not necessarily what we think it is. In many cases, end users do not take care to scale the projection space with respect to the multidimensional space, and when using non-linear mappings, such scaling is not even possible. Yet without scaling, the geometrical structures that appear make no more sense than those of a random map, and we cannot make inferences from the display back to the multidimensional space. No clusters, no trends, no outliers: there is nothing to infer without first quantifying the mapping quality. Several methods to assess mappings have been devised. Here, we propose CheckViz, a new method belonging to the framework of Verity Visualization. We define a two-dimensional, perceptually uniform colour coding which allows visualizing tears and false neighbourhoods, the two elementary and complementary types of geometrical mapping distortion, directly on the map at the locations where they occur. As our examples demonstrate, this visualization method is essential to help users make sense of mappings and to prevent over-interpretation. It could be applied to check other mappings as well.

Item: Authoring Hierarchical Road Networks (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Galin, Eric; Peytavie, Adrien; Guérin, Eric; Benes, Bedrich
Editors: Bing-Yu Chen, Jan Kautz, Tong-Yee Lee, and Ming C. Lin
We present a procedural method for generating hierarchical road networks connecting cities, towns and villages over large terrains. Our approach relies on an original geometric graph generation algorithm based on a non-Euclidean metric, combined with a path-merging algorithm that creates junctions between the different types of roads. Unlike previous work, our method allows high-level user control by manipulating the density and the pattern of the network. The geometry of the highways, primary and secondary roads, as well as the interchanges and intersections, is automatically created from the graph structure by instantiating generic parameterized models.

Item: Shape Analysis with Subspace Symmetries (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Berner, Alexander; Wand, Michael; Mitra, Niloy J.; Mewes, Daniel; Seidel, Hans-Peter
Editors: M. Chen and O. Deussen
We address the problem of partial symmetry detection, i.e., the identification of the building blocks a complex shape is composed of. Previous techniques identify parts that relate to each other by simple rigid mappings, similarity transforms, or, more recently, intrinsic isometries. Our approach generalizes the notion of partial symmetries to more general deformations. We introduce subspace symmetries, whereby we characterize similarity by requiring the set of symmetric parts to form a low-dimensional shape space. We present an algorithm to discover subspace symmetries based on detecting linearly correlated correspondences among graphs of invariant features. We evaluate our technique on various data sets. We show that for models with pronounced surface features, subspace symmetries can be found fully automatically.
For complicated cases, a small amount of user input is used to resolve ambiguities. Our technique computes dense correspondences that can subsequently be used in various applications, such as model repair and denoising.

Item: Preface and Table of Contents (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Editors: Ravi Ramamoorthi and Erik Reinhard

Item: Prostate Cancer Visualization from MR Imagery and MR Spectroscopy (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Marino, Joseph; Kaufman, Arie
Editors: H. Hauser, H. Pfister, and J. J. van Wijk
Prostate cancer is one of the most prevalent cancers among males, and the use of magnetic resonance imaging (MRI) has been suggested for its detection. A framework is presented for scoring and visualizing various MR data in an efficient and intuitive manner. A classification method is introduced in which a cumulative score volume is created that takes into account each of three acquisition types. This score volume is integrated into a volume rendering framework which allows the user to view the prostate gland, the multi-modal score values, and the surrounding anatomy. A visibility persistence mode is introduced to automatically avoid full occlusion of a selected score and to indicate overlaps. The use of GPU-accelerated multi-modal single-pass ray casting provides an interactive experience. User-driven importance rendering allows the user to gain insight into the data and can assist in localization of the disease and treatment planning. We evaluate our results against pathology and radiologists' determinations.

Item: Visual Coherence for Large-Scale Line-Plot Visualizations (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Muigg, Philipp; Hadwiger, Markus; Doleisch, Helmut; Gröller, Eduard
Editors: H. Hauser, H. Pfister, and J. J. van Wijk
Displaying a large number of lines within a limited amount of screen space is a task common to many different classes of visualization techniques, such as time-series visualizations, parallel coordinates, link-node diagrams, and phase-space diagrams. This paper addresses the challenging problems of cluttering and overdraw inherent to such visualizations. We generate a 2x2 tensor field during line rasterization that encodes the distribution of line orientations through each image pixel. Anisotropic diffusion of a noise texture is then used to generate a dense, coherent visualization of line orientation. In order to represent features of different scales, we employ a multi-resolution representation of the tensor field. The resulting technique can easily be applied to a wide variety of line-based visualizations. We demonstrate this for parallel coordinates, a time-series visualization, and a phase-space diagram. Furthermore, we demonstrate how to integrate a focus+context approach by incorporating a second tensor field. Our approach achieves interactive rendering performance for large data sets containing millions of data items, due to its image-based nature and ease of implementation on GPUs. Simulation results from computational fluid dynamics are used to evaluate the performance and usefulness of the proposed method.

Item: A New QEM for Parametrization of Raster Images (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Yin, Xuetao; Femiani, John; Wonka, Peter; Razdan, Anshuman
Editors: Eduard Groeller and Holly Rushmeier
We present an image processing method that converts a raster image to a simplicial two-complex which has only a small number of vertices (the base mesh), plus a parametrization that maps each pixel in the original image to a combination of the barycentric coordinates of the triangle it is finally mapped into.
Such a conversion of a raster image into a base mesh plus parametrization can be useful for many applications, such as segmentation, image retargeting, multi-resolution editing with arbitrary topologies, edge-preserving smoothing, and compression. The goal of the algorithm is to produce a base mesh with small colour distortion and high shape fairness, and a parametrization that is globally continuous, both visually and numerically. Inspired by multi-resolution adaptive parametrization of surfaces and the quadric error metric, the algorithm converts the pixels of the image to a dense triangle mesh and performs error-bounded simplification that jointly considers geometry and colour. The eliminated vertices are projected onto an existing face. The implementation is iterative and stops when it reaches a prescribed error threshold. The algorithm is feature-sensitive, i.e. salient feature edges in the image are preserved where possible, and it takes colour into account, thereby producing a better-quality triangulation.

Item: PaperVis: Literature Review Made Easy (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Chou, Jia-Kai; Yang, C.-K.
Editors: H. Hauser, H. Pfister, and J. J. van Wijk
Reviewing the literature of a research field is an important task for academics. One could use Google-like information-seeking tools, but often ends up with too many possibly related papers, as well as the papers in the associated citation network. During such a process, a user may easily get lost after following a few links for searching or cross-referencing. It is also difficult for the user to identify relevant or important papers within the resulting huge collection. Our work, called PaperVis, endeavors to provide a user-friendly interface that helps users quickly grasp the intrinsically complex citation-reference structures among a specific group of papers. We modify the existing Radial Space Filling (RSF) and Bullseye View techniques to arrange the involved papers as a node-link graph that better depicts the relationships among them while saving screen space at the same time. PaperVis applies visual cues to present node attributes and their transitions among interactions, and it categorizes papers into semantically meaningful hierarchies to facilitate ensuing literature exploration. We conduct experiments on the InfoVis 2004 Contest Dataset to demonstrate the effectiveness of PaperVis.

Item: Least Squares Vertex Baking (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Kavan, Ladislav; Bargteil, Adam W.; Sloan, Peter-Pike
Editors: Ravi Ramamoorthi and Erik Reinhard
We investigate the representation of signals defined on triangle meshes using linearly interpolated vertex attributes. Compared to texture mapping, storing data only at vertices yields significantly lower memory overhead and less expensive runtime reconstruction. However, standard approaches to determining vertex values, such as point sampling or averaging triangle samples, lead to suboptimal approximations. We discuss how an optimal solution can be efficiently calculated using continuous least squares. In addition, we propose a regularization term that allows us to minimize gradient discontinuities and Mach banding artifacts while staying close to the optimum. Our method has been integrated into a game production lighting tool, and we present examples of representing signals such as ambient occlusion and precomputed radiance transfer in real game scenes, where vertex baking was used to free up resources for other game components.

Item: Topology-based Visualization of Transformation Pathways in Complex Chemical Systems (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Beketayev, Kenes; Weber, G. H.; Haranczyk, M.; Bremer, P.-T.; Hlawitschka, M.; Hamann, B.
Editors: H. Hauser, H. Pfister, and J. J. van Wijk
Studying transformation in a chemical system by considering its energy as a function of the coordinates of the system's components provides insight into, and changes our understanding of, this process. Currently, a lack of effective visualization techniques for high-dimensional energy functions limits chemists to plotting energy with respect to one or two coordinates at a time. In some complex systems, developing a comprehensive understanding requires new visualization techniques that show the relationships between all coordinates at the same time. We propose a new visualization technique that combines concepts from topological analysis, multi-dimensional scaling, and graph layout to enable the analysis of energy functions for a wide range of molecular structures. We demonstrate our technique by studying the energy function of a dimer of formic and acetic acids and an LTA zeolite structure, in which we consider the diffusion of methane.

Item: A Multiscale Metric for 3D Mesh Visual Quality Assessment (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Lavoué, Guillaume
Editors: Mario Botsch and Scott Schaefer
Many processing operations, such as compression, watermarking and remeshing, are nowadays applied to 3D meshes. These processes are mostly driven and/or evaluated using simple distortion measures like the Hausdorff distance and the root mean square error; however, these measures do not correlate with human visual perception, while the visual quality of the processed meshes is a crucial issue. In that context we introduce a full-reference 3D mesh quality metric. This metric can compare two meshes with arbitrary connectivity or sampling density and produces a score that predicts the distortion visibility between them; a visual distortion map is also created. Our metric outperforms its counterparts from the state of the art in terms of correlation with mean opinion scores coming from subjective experiments on three existing databases.
Additionally, we present an application of this new metric to improving the rate-distortion evaluation of recent progressive compression algorithms.

Item: A Volumetric Approach to Predictive Rendering of Fabrics (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Schröder, Kai; Klein, Reinhard; Zinke, Arno
Editors: Ravi Ramamoorthi and Erik Reinhard
Efficient, physically accurate modeling and rendering of woven cloth at the yarn level is an inherently complicated task due to the underlying geometrical and optical complexity. In this paper, a novel and general approach to physically accurate cloth rendering is presented. By using a statistical volumetric model approximating the distribution of yarn fibers, a prohibitively costly explicit geometrical representation is avoided. As a result, accurate rendering of even large pieces of fabric containing orders of magnitude more fibers becomes practical without sacrificing much generality compared to fiber-based techniques. By employing the concept of local visibility and introducing the effective fiber density, limitations of existing volumetric approaches regarding self-shadowing and fiber density estimation are greatly reduced.

Item: Real Time Edit Propagation by Efficient Sampling (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Bie, Xiaohui; Huang, Haoda; Wang, Wencheng
Editors: Bing-Yu Chen, Jan Kautz, Tong-Yee Lee, and Ming C. Lin
Editing the appearance of images using strokes is popular because of its ease of use and its convenience in conveying the user's intention. However, propagating the user inputs to the rest of the image requires solving an enormous optimization problem, which is very time consuming and prevents practical use. In this paper, a two-step edit propagation scheme is proposed: first, edits are solved on clusters of similar pixels; then, individual pixel edits are interpolated from the cluster edits. The key to our scheme is that we use efficient stroke sampling to compute the affinity between image pixels and strokes. Based on this, our clustering does not need to be stroke-adaptive, so the number of clusters is greatly reduced, resulting in a significant speedup. The proposed method has been tested on various images, and the results show that it is more than one order of magnitude faster than existing methods while still achieving precise results compared with the ground truth. Moreover, its efficiency is not sensitive to the number of strokes, making it suitable for performing dense edits in practice.

Item: Visualization of Time-Series Data in Parameter Space for Understanding Facial Dynamics (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Tam, Gary K. L.; Fang, H.; Aubrey, A. J.; Grant, P. W.; Rosin, P. L.; Marshall, D.; Chen, M.
Editors: H. Hauser, H. Pfister, and J. J. van Wijk
Over the past decade, computer scientists and psychologists have made great efforts to collect and analyze facial dynamics data that exhibit different expressions and emotions. Such data is commonly captured as videos and transformed into feature-based time-series prior to any analysis. However, analytical tasks such as expression classification have been hindered by a lack of understanding of the complex data space and the associated algorithm space. Conventional graph-based time-series visualization is also found inadequate to support such tasks. In this work, we adopt a visual analytics approach by visualizing the correlation between the algorithm space and our goal of classifying facial dynamics. We transform multiple feature-based time-series for each expression in measurement space into a multi-dimensional representation in parameter space. This enables us to utilize parallel coordinates visualization to gain an understanding of the algorithm space, providing a fast and cost-effective means to support the design of analytical algorithms.
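The continuous least-squares idea behind the "Least Squares Vertex Baking" entry above can be illustrated with a minimal 1D sketch. This is a hypothetical analogue with made-up names, not the paper's mesh formulation: vertex values are fitted so that linear interpolation best matches dense samples of a signal, instead of simply point-sampling the signal at the vertices.

```python
import numpy as np

def hat_basis(x, verts):
    """Piecewise-linear 'hat' basis functions over a 1D vertex set,
    evaluated at sample positions x. Row i holds the interpolation
    weights of sample x[i] with respect to each vertex."""
    n = len(verts)
    B = np.zeros((len(x), n))
    for j in range(n):
        if j > 0:        # ascending ramp from the left neighbour
            m = (x >= verts[j - 1]) & (x <= verts[j])
            B[m, j] = (x[m] - verts[j - 1]) / (verts[j] - verts[j - 1])
        if j < n - 1:    # descending ramp to the right neighbour
            m = (x >= verts[j]) & (x <= verts[j + 1])
            B[m, j] = (verts[j + 1] - x[m]) / (verts[j + 1] - verts[j])
    return B

verts = np.linspace(0.0, 1.0, 5)        # coarse "mesh": 5 vertices on [0, 1]
x = np.linspace(0.0, 1.0, 201)          # dense sample positions
signal = np.sin(2.0 * np.pi * x) ** 2   # signal to bake (an AO-like value)

B = hat_basis(x, verts)

# Least-squares baking: vertex values minimizing the squared error of
# the linear interpolation against all dense samples.
baked, *_ = np.linalg.lstsq(B, signal, rcond=None)

# Naive alternative: point-sample the signal at the vertices.
sampled = np.sin(2.0 * np.pi * verts) ** 2

err_baked = np.mean((B @ baked - signal) ** 2)
err_sampled = np.mean((B @ sampled - signal) ** 2)
```

Because the point-sampled vertex values lie in the same interpolation space, the least-squares fit can never do worse, and on oscillating signals it is usually substantially better; this is the motivation for solving a least-squares problem rather than sampling, though the paper works with continuous integrals over triangles rather than dense samples on a line.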