Volume 31 (2012)
Browsing Volume 31 (2012) by Issue Date
Now showing 1 - 20 of 249
Item Procedural Texture Preview (The Eurographics Association and John Wiley and Sons Ltd., 2012) Lasram, Anass; Lefebvre, Sylvain; Damez, Cyrille; P. Cignoni and T. Ertl
Procedural textures usually require time spent testing parameters to realize the diversity of appearances they can produce. This paper introduces the idea of a procedural texture preview: a single static image summarizing, in a limited pixel space, the appearances produced by a given procedure. Unlike grids of thumbnails, our previews present a continuous image of appearances, analogous to a map. The main challenge is to ensure that most appearances are visible, are allocated a similar pixel area, and are ordered in a smooth manner throughout the preview. To reach this goal, we introduce a new layout algorithm that accounts for these criteria simultaneously. After computing a layout of appearances, we rely on by-example texture synthesis to produce the final preview. We demonstrate our approach on a database of production-level procedural textures.

Item A Quantized Boundary Representation of 2D Flows (The Eurographics Association and Blackwell Publishing Ltd., 2012) Levine, Joshua; Jadhav, Shreeraj; Bhatia, Harsh; Pascucci, Valerio; Bremer, Peer-Timo; S. Bruckner, S. Miksch, and H. Pfister
Analysis and visualization of complex vector fields remain major challenges when studying large-scale simulations of physical phenomena. The primary reason is the gap between the concepts of smooth vector field theory and their computational realization. In practice, researchers must choose between numerical techniques, with limited or no guarantees on how they preserve fundamental invariants, and discrete techniques, which limit the precision at which the vector field can be represented. We propose a new representation of vector fields that combines the advantages of both approaches. In particular, we represent a subset of possible streamlines by storing their paths as they traverse the edges of a triangulation. Using only a finite set of streamlines creates a fully discrete version of a vector field that nevertheless approximates the smooth flow up to a user-controlled error bound. The discrete nature of our representation enables us to directly compute and classify analogues of critical points, closed orbits, and other common topological structures. Further, by varying the number of divisions (quantizations) used per edge, we vary the resolution used to represent the field, allowing for controlled precision. This representation is compact in memory and supports standard vector field operations.

Item Spatio-Temporal Filtering of Indirect Lighting for Interactive Global Illumination (The Eurographics Association and Blackwell Publishing Ltd., 2012) Chen, Ying-Chieh; Lei, Su Ian Eugene; Chang, Chun-Fa; Holly Rushmeier and Oliver Deussen
We introduce a screen-space statistical filtering method for real-time rendering with global illumination. It is inspired by the statistical filtering proposed by Meyer et al., which reduces the noise in global illumination over a period of time by estimating the principal components from all rendered frames. Our work extends their method to achieve nearly real-time performance on modern GPUs. More specifically, our method employs candid covariance-free incremental PCA to overcome several limitations of the original algorithm by Meyer et al., such as the high computational cost and memory usage that hinder its implementation on GPUs. By combining reprojection and per-pixel weighting techniques, our method also handles view changes and object movement in dynamic scenes.
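To make the incremental PCA step in the entry above concrete, the following is a minimal NumPy sketch of one candid covariance-free incremental PCA (CCIPCA) update in the spirit of Weng et al.; it illustrates the general technique only, not the paper's GPU implementation, and the function and parameter names are ours.

```python
import numpy as np

def ccipca_update(V, x, n, amnesia=0.0):
    """One CCIPCA step: refine the k principal-direction estimates in V (a k x d
    float array) with a new mean-removed sample x (length d). n is the 1-based
    count of samples seen so far, including x; the norm of each row of V
    approximates the corresponding eigenvalue. A positive 'amnesia' value
    down-weights older samples."""
    u = x.astype(float).copy()
    for i in range(V.shape[0]):
        if n == i + 1:                        # first sample available for component i
            V[i] = u
        elif n > i + 1:
            v_hat = V[i] / (np.linalg.norm(V[i]) + 1e-12)
            w_old = (n - 1 - amnesia) / n
            w_new = (1 + amnesia) / n
            V[i] = w_old * V[i] + w_new * np.dot(u, v_hat) * u
            v_hat = V[i] / (np.linalg.norm(V[i]) + 1e-12)
            u = u - np.dot(u, v_hat) * v_hat  # deflate before estimating the next component
    return V
```

Feeding each frame's mean-removed pixel vector through such an update keeps only k running direction estimates in memory instead of the full frame history, which is the property that makes an incremental formulation attractive for GPU use.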
Item Fast and Robust Normal Estimation for Point Clouds with Sharp Features (The Eurographics Association and Blackwell Publishing Ltd., 2012) Boulch, Alexandre; Marlet, Renaud; Eitan Grinspun and Niloy Mitra
This paper presents a new method for estimating normals on unorganized point clouds that preserves sharp features. It is based on a robust version of the Randomized Hough Transform (RHT). We consider the filled Hough transform accumulator as an image of the discrete probability distribution of possible normals. The normal we estimate corresponds to the maximum of this distribution. We use a fixed-size accumulator for speed, statistical exploration bounds for robustness, and randomized accumulators to prevent discretization effects. We also propose various sampling strategies to deal with anisotropy, such as that produced by laser scans due to differences in incidence. Our experiments show that our approach offers an ideal compromise between precision, speed, and robustness: it is at least as precise and noise-resistant as state-of-the-art methods that preserve sharp features, while being almost an order of magnitude faster. Moreover, it can handle anisotropy with minor speed and precision losses.
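As a rough illustration of the Hough-voting principle behind the normal-estimation entry above (not the authors' optimized scheme), a minimal per-point estimator might look like the sketch below; the names and bin layout are ours.

```python
import numpy as np

def rht_normal(neighbors, n_triples=200, n_bins=16, rng=None):
    """Estimate the normal at a point from its k nearest neighbors (k x 3 array)
    by randomized Hough voting: sample triples, compute their plane normals,
    vote into a coarse accumulator over the unit hemisphere, and return the
    mean of the normals that fell into the fullest bin."""
    rng = np.random.default_rng() if rng is None else rng
    votes = {}
    for _ in range(n_triples):
        i, j, k = rng.choice(len(neighbors), size=3, replace=False)
        n = np.cross(neighbors[j] - neighbors[i], neighbors[k] - neighbors[i])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                      # degenerate (nearly collinear) triple
        n /= norm
        if n[2] < 0:                      # fold antipodal directions together
            n = -n
        # discretize the direction by its spherical angles
        theta = np.arccos(np.clip(n[2], -1.0, 1.0))
        phi = np.arctan2(n[1], n[0]) % (2 * np.pi)
        key = (int(theta / np.pi * n_bins), int(phi / (2 * np.pi) * n_bins))
        votes.setdefault(key, []).append(n)
    best = max(votes.values(), key=len)
    m = np.mean(best, axis=0)
    return m / np.linalg.norm(m)
```

In the paper itself, the accumulator is kept at a fixed size, the number of sampled triples is derived from statistical confidence bounds, and several randomized accumulators are used to suppress discretization effects.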
Item Isotropic Surface Remeshing Using Constrained Centroidal Delaunay Mesh (The Eurographics Association and Blackwell Publishing Ltd., 2012) Chen, Zhonggui; Cao, Juan; Wang, Wenping; C. Bregler, P. Sander, and M. Wimmer
We develop a novel isotropic remeshing method based on the constrained centroidal Delaunay mesh (CCDM), a generalization of the centroidal patch triangulation from 2D to mesh surfaces. Our method starts by resampling an input mesh with a vertex distribution according to a user-defined density function. The initial remeshing result is then progressively optimized by alternately recovering the Delaunay mesh and moving each vertex to the centroid of its 1-ring neighborhood. The key to making such simple iterations work is an efficient optimization framework that combines both local and global optimization methods. Our method is parameterization-free, thus avoiding the metric distortion introduced by parameterization, and generates well-shaped triangles. It guarantees that the topology of the surface is preserved without requiring geodesic information. We conduct various experiments to demonstrate the simplicity, efficacy, and robustness of the presented method.

Item How Not to Be Seen -- Object Removal from Videos of Crowded Scenes (The Eurographics Association and John Wiley and Sons Ltd., 2012) Granados, Miguel; Tompkin, James; Kim, Kwang In; Grau, Oliver; Kautz, Jan; Theobalt, Christian; P. Cignoni and T. Ertl
Removing dynamic objects from videos is an extremely challenging problem that even visual effects professionals often solve with time-consuming manual frame-by-frame editing. We propose a new approach to video completion that can deal with complex scenes containing dynamic background and non-periodically moving objects. We build upon the idea that the spatio-temporal hole left by a removed object can be filled with data available in other regions of the video where the occluded objects were visible. Video completion is performed by solving a large combinatorial problem that searches for an optimal pattern of pixel offsets from occluded to unoccluded regions. Our contribution includes an energy functional that generalizes well over different scenes with stable parameters and that has the desirable convergence properties for a graph-cut-based optimization. We provide an interface to guide the completion process that both reduces computation time and allows for efficient correction of small errors in the result. We demonstrate that our approach can effectively complete complex, high-resolution occlusions that are more difficult than those handled by existing methods.

Item Geometry Presorting for Implicit Object Space Partitioning (The Eurographics Association and Blackwell Publishing Ltd., 2012) Eisemann, Martin; Bauszat, Pablo; Guthe, Stefan; Magnor, Marcus; Fredo Durand and Diego Gutierrez
We present a new data structure for object space partitioning that can be represented completely implicitly. The bounds of each node in the tree structure are recreated at run-time from the scene objects contained therein. By applying a presorting procedure to the geometry, only a known fraction of the geometry is needed to locate the bounding planes of any node. We evaluate the impact of the implicit bounding plane representation and compare our algorithm to a classic bounding volume hierarchy. Though the representation is completely implicit, we still achieve interactive frame rates on commodity hardware.

Item Finding Surface Correspondences Using Symmetry Axis Curves (The Eurographics Association and Blackwell Publishing Ltd., 2012) Liu, Tianqiang; Kim, Vladimir G.; Funkhouser, Thomas; Eitan Grinspun and Niloy Mitra
In this paper, we propose an automatic algorithm for finding a correspondence map between two 3D surfaces. The key insight is that global reflective symmetry axes are stable, recognizable, semantic features of most real-world surfaces. Thus, it is possible to find a useful map between two surfaces by first extracting symmetry axis curves, aligning the extracted curves, and then extrapolating correspondences found on the curves to both surfaces. The main advantages of this approach are efficiency and robustness: the difficult problem of finding a surface map is reduced to three significantly easier problems: symmetry detection, curve alignment, and correspondence extrapolation, each of which has a robust, polynomial-time solution (e.g., optimal alignment of 1D curves is possible with dynamic programming). We investigate this approach on a wide range of examples, including both intrinsically symmetric surfaces and polygon soups, and find that it is superior to previous methods in cases where two surfaces have different overall shapes but similar reflective symmetry axes, a common case in computer graphics.
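The curve-alignment step mentioned in the entry above can be illustrated with the classic dynamic-programming recurrence for aligning two sampled curves; this is a generic DTW-style sketch under our own notation, not the paper's specific matching cost.

```python
import numpy as np

def align_curves(A, B):
    """Optimal monotonic alignment of two sampled curves by dynamic programming.
    A and B are (m x d) and (n x d) arrays of per-sample descriptors; returns
    the list of matched index pairs from start to end of both curves."""
    m, n = len(A), len(B)
    cost = np.full((m + 1, n + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d = np.linalg.norm(A[i - 1] - B[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1],   # match both samples
                                 cost[i - 1, j],       # skip a sample of A
                                 cost[i, j - 1])       # skip a sample of B
    # backtrack to recover the matching
    i, j, pairs = m, n, []
    while i > 0 and j > 0:
        pairs.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]
```

Here the per-sample descriptor distance stands in for whatever curve features are actually matched; the recurrence is what makes the alignment optimal in polynomial time, as the abstract notes.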
Item Efficiently Simulating the Bokeh of Polygonal Apertures in a Post-Process Depth of Field Shader (The Eurographics Association and Blackwell Publishing Ltd., 2012) McIntosh, L.; Riecke, B. E.; DiPaola, S.; Holly Rushmeier and Oliver Deussen
The effect of aperture shape on an image, known in photography as 'bokeh', is an important characteristic of depth of field in real-world cameras. However, most real-time depth of field techniques produce Gaussian bokeh rather than the circular or polygonal bokeh that is almost universal in real-world cameras. 'Scattering' (i.e., point-splatting) techniques provide a flexible way to model any aperture shape, but tend to have prohibitively slow performance and require geometry shaders or significant engine changes to implement. This paper shows that simple post-process 'gathering' depth of field shaders can be easily extended to simulate certain bokeh effects. Specifically, we show that it is possible to efficiently model the bokeh effects of square, hexagonal and octagonal apertures using a novel separable filtering approach. Performance data from a video game engine test demonstrates that our shaders attain much better frame rates than a naive non-separable approach.
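The separable idea in the entry above can be sketched in a few lines: a square bokeh of radius r is produced by two 1D box passes rather than one 2D gather. This toy version assumes a uniform blur radius; a real depth-of-field shader varies the radius per pixel with the circle of confusion, and the paper obtains hexagonal and octagonal shapes by combining skewed separable passes.

```python
import numpy as np

def box1d(img, r, axis):
    """1D box filter of radius r along one axis (edges clamped)."""
    pad = [(0, 0)] * img.ndim
    pad[axis] = (r, r)
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for o in range(2 * r + 1):
        out += np.take(p, range(o, o + img.shape[axis]), axis=axis)
    return out / (2 * r + 1)

def square_bokeh(img, r):
    """Square bokeh: a horizontal pass followed by a vertical pass,
    costing 2*(2r+1) taps per pixel instead of (2r+1)**2 for a 2D gather."""
    return box1d(box1d(img, r, axis=1), r, axis=0)
```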
Item SHADOWPIX: Multiple Images from Self Shadowing (The Eurographics Association and John Wiley and Sons Ltd., 2012) Bermano, Amit; Baran, Ilya; Alexa, Marc; Matusik, Wojciech; P. Cignoni and T. Ertl
SHADOWPIX are white surfaces that display several prescribed images formed by the self-shadowing of the surface when lit from certain directions. The effect is surprising and not commonly seen in the real world. We present algorithms for constructing SHADOWPIX that allow up to four images to be embedded in a single surface. SHADOWPIX can produce a variety of unusual effects depending on the embedded images: moving the light can animate or relight the object in the image, or three colored lights may be used to produce a single colored image. SHADOWPIX are easy to manufacture using a 3D printer, and we present photographs, videos, and renderings demonstrating these effects.

Item 33rd EUROGRAPHICS General Assembly (The Eurographics Association and Blackwell Publishing Ltd., 2012) Holly Rushmeier and Oliver Deussen

Item Cultural Heritage Predictive Rendering (The Eurographics Association and Blackwell Publishing Ltd., 2012) Happa, Jassim; Bashford-Rogers, Tom; Wilkie, Alexander; Artusi, Alessandro; Debattista, Kurt; Chalmers, Alan; Holly Rushmeier and Oliver Deussen
High-fidelity rendering can be used to investigate Cultural Heritage (CH) sites in a scientifically rigorous manner. However, a high degree of realism in the reconstruction of a CH site can be misleading insofar as it can be seen to imply a high degree of certainty about the displayed scene, which is frequently not the case, especially when investigating the past. So far, little effort has gone into adapting and formulating a Predictive Rendering pipeline for CH research applications. In this paper, we first discuss the goals and the workflow of CH reconstructions in general, as well as those of traditional Predictive Rendering. Based on this, we then propose a research framework for CH research, which we refer to as 'Cultural Heritage Predictive Rendering' (CHPR). This is an extension to Predictive Rendering that introduces a temporal component and addresses the uncertainty that is important for the scene's historical interpretation. To demonstrate these concepts, two example case studies are detailed.

Item Shape-Up: Shaping Discrete Geometry with Projections (The Eurographics Association and Blackwell Publishing Ltd., 2012) Bouaziz, Sofien; Deuss, Mario; Schwartzburg, Yuliy; Weise, Thibaut; Pauly, Mark; Eitan Grinspun and Niloy Mitra
We introduce a unified optimization framework for geometry processing based on shape constraints. These constraints preserve or prescribe the shape of subsets of the points of a geometric data set, such as polygons, one-ring cells, volume elements, or feature curves. Our method is based on two key concepts: a shape proximity function and shape projection operators. The proximity function encodes the distance of a desired least-squares fitted elementary target shape to the corresponding vertices of the 3D model. Projection operators are employed to minimize the proximity function by relocating vertices in a minimal way to match the imposed shape constraints. We demonstrate that this approach leads to a simple, robust, and efficient algorithm that allows implementing a variety of geometry processing applications simply by combining suitable projection operators. We show examples for computing planar and circular meshes, shape space exploration, mesh quality improvement, shape-preserving deformation, and conformal parametrization. Our optimization framework provides a systematic way of building new solvers for geometry processing and produces similar or better results than state-of-the-art methods.

Item Drawing Large Graphs by Low-Rank Stress Majorization (The Eurographics Association and Blackwell Publishing Ltd., 2012) Khoury, Marc; Hu, Yifan; Krishnan, Shankar; Scheidegger, Carlos; S. Bruckner, S. Miksch, and H. Pfister
Optimizing a stress model is a natural technique for drawing graphs: one seeks an embedding into R^d which best preserves the induced graph metric. Current approaches to solving the stress model for a graph with |V| nodes and |E| edges require the full all-pairs shortest paths (APSP) matrix, which takes O(|V|^2 log |E| + |V||E|) time and O(|V|^2) space. We propose a novel algorithm based on a low-rank approximation to the required matrices. The crux of our technique is the observation that it is possible to approximate the full APSP matrix, even when only a small subset of its entries are known. Our algorithm takes O(k|V| + |V| log |V| + |E|) time per iteration with a preprocessing time of O(k^3 + k(|E| + |V| log |V|) + k^2|V|) and memory usage of O(k|V|), where a user-defined parameter k trades off quality of approximation with running time and space. We give experimental results which show, to the best of our knowledge, the largest (albeit approximate) full stress model based layouts to date.
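For reference, the stress energy referred to in the graph-drawing entry above is the standard one (the textbook form with the common weighting, not a new contribution of the paper):

```latex
\mathrm{stress}(X) \;=\; \sum_{i<j} w_{ij}\,\bigl(\lVert x_i - x_j \rVert - d_{ij}\bigr)^{2},
\qquad w_{ij} = d_{ij}^{-2},
```

where d_{ij} is the shortest-path distance between nodes i and j and x_i in R^d is the layout position of node i; the paper's contribution is avoiding the full matrix of d_{ij} through a low-rank approximation.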
Item Wetting Effects in Hair Simulation (The Eurographics Association and Blackwell Publishing Ltd., 2012) Rungjiratananon, Witawat; Kanamori, Yoshihiro; Nishita, Tomoyuki; C. Bregler, P. Sander, and M. Wimmer
There has been considerable recent progress in hair simulation, driven by the high demands of computer-animated movies. However, capturing the complex interactions between hair and water is still relatively in its infancy. Such interactions are best modeled as those between water and an anisotropic permeable medium, as water can flow into and out of the hair volume biased in the hair fiber direction. Modeling the interaction is further challenged when the hair is allowed to move. In this paper, we introduce a simulation model that reproduces interactions between water and hair as a dynamic anisotropic permeable material. We utilize an Eulerian approach for capturing the microscopic porosity of hair and handle the wetting effects using a Cartesian bounding grid. A Lagrangian approach is used to simulate every single hair strand, including interactions with each other, yielding finely detailed dynamic hair simulation. Our model and simulation generate many interesting effects of interactions between finely detailed dynamic hair and water, i.e., water absorption and diffusion, cohesion of wet hair strands, water flow within the hair volume, water dripping from the wet hair strands, and morphological shape transformations of wet hair.

Item Importance Caching for Complex Illumination (The Eurographics Association and John Wiley and Sons Ltd., 2012) Georgiev, Iliyan; Krivánek, Jaroslav; Popov, Stefan; Slusallek, Philipp; P. Cignoni and T. Ertl
Realistic rendering requires computing the global illumination in the scene, and Monte Carlo integration is the best-known method for doing that. The key to good performance is to carefully select the costly integration samples, which is usually achieved via importance sampling. Unfortunately, visibility is difficult to factor into the importance distribution, which can greatly increase variance in highly occluded scenes with complex illumination. In this paper, we present importance caching, a novel approach that selects those samples with a distribution that includes visibility, while maintaining efficiency by exploiting illumination smoothness. At a sparse set of locations in the scene, we construct and cache several types of probability distributions with respect to a set of virtual point lights (VPLs), which notably include visibility. Each distribution type is optimized for a specific lighting condition. For every shading point, we then borrow the distributions from nearby cached locations and use them for VPL sampling, avoiding additional bias. A novel multiple importance sampling framework finally combines the many estimators. In highly occluded scenes, where visibility is a major source of variance in the incident radiance, our approach can reduce variance by more than an order of magnitude. Even in such complex scenes we can obtain accurate and low-noise previews with full global illumination in a couple of seconds on a single mid-range CPU.
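The combination step in the importance-caching entry above builds on multiple importance sampling; for reference, the standard balance-heuristic combination of estimators drawn from several distributions p_1, ..., p_m (Veach's form, not the paper's specific weighting) is:

```latex
F \;=\; \sum_{i=1}^{m} \frac{1}{n_i} \sum_{j=1}^{n_i} w_i(X_{i,j})\,
\frac{f(X_{i,j})}{p_i(X_{i,j})},
\qquad
w_i(x) \;=\; \frac{n_i\, p_i(x)}{\sum_{k=1}^{m} n_k\, p_k(x)},
```

where n_i samples X_{i,j} are drawn from p_i; in this setting the p_i would be the several cached, visibility-aware VPL distributions borrowed at a shading point.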
Item A Cell-Based Light Interaction Model for Human Blood (The Eurographics Association and John Wiley and Sons Ltd., 2012) Yim, Daniel; Baranoski, Gladimir V. G.; Kimmel, Brad W.; Chen, T. Francis; Miranda, Erik; P. Cignoni and T. Ertl
The development of predictive appearance models for organic tissues is a challenging task due to the inherent complexity of these materials. In this paper, we closely examine the biophysical processes responsible for the appearance attributes of whole blood, one of the most fundamental of these materials. We describe a new appearance model that simulates the mechanisms of light propagation and absorption within the cellular and fluid portions of this specialized tissue. The proposed model employs a comprehensive, yet flexible, first-principles approach based on the morphological, optical and biochemical properties of blood cells. This approach allows environment-driven changes in the cells' anatomy and orientation to be appropriately included in the light transport simulations. The correctness and predictive capabilities of the proposed model are quantitatively and qualitatively evaluated through comparisons of modeled results with actual measured data and experimental observations reported in the scientific literature. Its incorporation into rendering systems is illustrated through images of blood samples depicting appearance variations controlled by physiologically meaningful parameters. Besides the contributions to the modeling of material appearance, the research presented in this paper is also expected to have applications in a wide range of biomedical areas, from optical diagnostics to the visualization and noninvasive imaging of blood-perfused tissues.

Item Low-Complexity Intervisibility in Height Fields (The Eurographics Association and Blackwell Publishing Ltd., 2012) Timonen, Ville; Holly Rushmeier and Oliver Deussen
Global illumination systems require intervisibility information between pairs of points in a scene. This visibility problem is computationally complex, and current interactive implementations for dynamic scenes are limited to crude approximations or small amounts of geometry. We present a novel algorithm to determine intervisibility from all points of dynamic height fields as visibility horizons in discrete azimuthal directions. The algorithm determines accurate visibility along each azimuthal direction in time linear in the number of output visibility horizons. This is achieved by using a novel visibility structure we call the convex hull tree. The key feature of our algorithm is its ability to incrementally update the convex hull tree such that at each receiver point only the visible parts of the height field are traversed. This results in low time complexity; compared to previous work, we achieve two orders of magnitude reduction in the number of algorithm iterations and a speedup of 2.4 to 41.
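To ground the notion of a visibility horizon used in the entry above, here is the naive per-receiver scan along one azimuthal direction of a height field; it is the quadratic-time baseline that hierarchical schemes such as the paper's convex hull tree are designed to avoid, and the function name and parameters are ours.

```python
import numpy as np

def horizon_angles(heights, step=1.0):
    """Naive horizon computation along one azimuthal scanline of a height field.
    heights[i] is the terrain height at sample i; returns, for every receiver i,
    the maximum elevation angle toward the preceding samples (its 'horizon')."""
    n = len(heights)
    horizon = np.full(n, -np.pi / 2)           # no occluder: horizon at -90 degrees
    for i in range(n):
        for j in range(i):                     # every sample 'behind' the receiver
            dh = heights[j] - heights[i]
            dist = (i - j) * step
            horizon[i] = max(horizon[i], np.arctan2(dh, dist))
    return horizon
```

The abstract's convex hull tree replaces the inner loop with an incrementally updated structure so that only the visible parts of the height field are traversed per receiver.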
Item NoRM: No-Reference Image Quality Metric for Realistic Image Synthesis (The Eurographics Association and John Wiley and Sons Ltd., 2012) Herzog, Robert; Cadík, Martin; Aydin, Tunç O.; Kim, Kwang In; Myszkowski, Karol; Seidel, Hans-Peter; P. Cignoni and T. Ertl
Synthetically generating images and video frames of complex 3D scenes using photo-realistic rendering software is often prone to artifacts and requires expert knowledge to tune the parameters. The manual work required for detecting and preventing artifacts can be automated through objective quality evaluation of synthetic images. Most practical objective quality assessment methods for natural images rely on a ground-truth reference, which is often not available in rendering applications. While general-purpose no-reference image quality assessment is a difficult problem, we show in a subjective study that the performance of the dedicated no-reference metric presented in this paper can match state-of-the-art metrics that do require a reference. This level of predictive power is achieved by exploiting information about the underlying synthetic scene (e.g., 3D surfaces, textures) instead of merely considering color, and by training our learning framework with typical rendering artifacts. We show that our method successfully detects various non-trivial types of artifacts, such as noise and clamping bias due to insufficient virtual point light sources, and shadow map discretization artifacts. We also briefly discuss an inpainting method for automatic correction of detected artifacts.

Item Exploring Different Parameters to Assess Left Ventricle Global and Regional Functional Analysis from Coronary CT Angiography (The Eurographics Association and Blackwell Publishing Ltd., 2012) Silva, Samuel; Santos, Beatriz Sousa; Madeira, Joaquim; Holly Rushmeier and Oliver Deussen
Coronary CT angiography is widely used in clinical practice for the assessment of coronary artery disease. Several studies have shown that the same exam can also be used to assess left ventricle (LV) function. Even though coronary CT angiography provides data concerning multiple cardiac phases along the cardiac cycle, LV function is usually evaluated using just the end-systolic and end-diastolic phases. This unused wealth of data, mostly due to its complexity and the lack of proper tools, has still to be explored to assess whether further insight is possible regarding regional LV functional analysis. Furthermore, different parameters can be computed to characterize LV function, and though some are well known by clinicians, others still need to be tested concerning their value in clinical scenarios. Based on these premises, we present several parameters characterizing global and regional LV function, computed for several cardiac phases over one cardiac cycle. The data provided by the computed parameters are shown using a set of visualizations allowing synchronized visual exploration of the different data. The main purpose is to provide means for clinicians to explore the data and gather insight into their meaning and their correlation with each other and with diagnosis outcomes.
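As one example of the kind of global LV-function parameter discussed in the entry above, the ejection fraction is derived from the end-diastolic and end-systolic volumes (a standard clinical definition rather than anything specific to this paper):

```latex
\mathrm{EF} \;=\; \frac{\mathrm{EDV} - \mathrm{ESV}}{\mathrm{EDV}} \times 100\%,
```

with EDV and ESV denoting the LV volumes at the end-diastolic and end-systolic phases, respectively.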