31-Issue 8
Browsing 31-Issue 8 by Title
Now showing 1 - 19 of 19

Item: Biharmonic Coordinates (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Weber, Ofir; Poranne, Roi; Gotsman, Craig; Holly Rushmeier and Oliver Deussen
Barycentric coordinates are an established mathematical tool in computer graphics and geometry processing, providing a convenient way of interpolating scalar or vector data from the boundary of a planar domain to its interior. Many different recipes for barycentric coordinates exist, some offering the convenience of a closed‐form expression, some providing other desirable properties at the expense of longer computation times. For example, harmonic coordinates, which are solutions to the Laplace equation, provide a long list of desirable properties (making them suitable for a wide range of applications), but lack a closed‐form expression. We derive a new type of barycentric coordinates based on solutions to the biharmonic equation. These coordinates can be considered a natural generalization of harmonic coordinates, with the additional ability to interpolate boundary derivative data. We provide an efficient and accurate way to numerically compute the biharmonic coordinates and demonstrate their advantages over existing schemes. We show that biharmonic coordinates are especially appealing for (but not limited to) 2D shape and image deformation and have clear advantages over existing deformation methods.
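
For orientation, the two boundary-value problems behind these coordinate families can be written schematically as follows (a standard textbook formulation, not notation taken from the paper): harmonic coordinates solve the Laplace equation with prescribed boundary values only, while biharmonic coordinates solve the fourth-order biharmonic equation and can additionally match boundary derivative data.

\[
\text{harmonic:}\quad \Delta u = 0 \ \text{in } \Omega,\qquad u = f \ \text{on } \partial\Omega;
\qquad
\text{biharmonic:}\quad \Delta^{2} u = 0 \ \text{in } \Omega,\qquad u = f,\ \ \tfrac{\partial u}{\partial n} = g \ \text{on } \partial\Omega,
\]

where \(f\) carries the boundary values being interpolated and \(g\) the optional boundary derivative data.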

Item: CageR: Cage‐Based Reverse Engineering of Animated 3D Shapes (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Thiery, Jean‐Marc; Tierny, Julien; Boubekeur, Tamy; Holly Rushmeier and Oliver Deussen
We present CageR: a novel framework for converting animated 3D shape sequences into compact and stable cage‐based representations. Given a raw animated sequence with one‐to‐one point correspondences together with an initial cage embedding, our algorithm automatically generates smoothly varying cage embeddings which faithfully reconstruct the enclosed object deformation. Our technique is fast, automatic, oblivious to the cage coordinate system, provides controllable error and exploits a GPU implementation. At the core of our method, we introduce a new algebraic algorithm based on maximum volume sub‐matrices (maxvol) to speed up and stabilize the deformation inversion. We also present a new spectral regularization algorithm that can apply arbitrary regularization terms on selected subparts of the inversion spectrum. This step allows us to enforce a highly localized cage regularization, guaranteeing its smooth variation along the sequence. We demonstrate the speed, accuracy and robustness of our framework on various synthetic and acquired data sets. The benefits of our approach are illustrated in applications such as animation compression and post‐editing.

Item: Comparison of Four Subjective Methods for Image Quality Assessment (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Mantiuk, Rafał K.; Tomaszewska, Anna; Mantiuk, Radosław; Holly Rushmeier and Oliver Deussen
To provide a convincing proof that a new method is better than the state of the art, computer graphics projects are often accompanied by user studies, in which a group of observers rank or rate results of several algorithms. Such user studies, known as subjective image quality assessment experiments, can be very time‐consuming and do not guarantee conclusive results. This paper is intended to help design efficient and rigorous quality assessment experiments and emphasise the key aspects of the results analysis. To promote good standards of data analysis, we review the major methods for data analysis, such as establishing confidence intervals, statistical testing and retrospective power analysis. Two methods of visualising ranking results together with meaningful information about the statistical and practical significance are explored. Finally, we compare the four most prominent subjective quality assessment methods: single‐stimulus, double‐stimulus, forced‐choice pairwise comparison and similarity judgements. We conclude that the forced‐choice pairwise comparison method results in the smallest measurement variance and thus produces the most accurate results. This method is also the most time‐efficient, assuming a moderate number of compared conditions.
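
As a rough illustration of how forced-choice pairwise comparison data is commonly analysed (a generic Thurstone Case V style scaling on made-up counts, not the exact procedure from the paper), one can convert the proportion of times each condition wins a comparison into z-scores and average them to obtain quality scale values:

    import numpy as np
    from scipy.stats import norm

    # wins[i, j] = number of observers who preferred condition i over condition j
    # (hypothetical counts for three conditions, 20 observers per pair)
    wins = np.array([[ 0., 14., 18.],
                     [ 6.,  0., 12.],
                     [ 2.,  8.,  0.]])

    trials = wins + wins.T                              # comparisons made per pair
    p = np.divide(wins, trials,
                  out=np.full_like(wins, 0.5), where=trials > 0)
    p = np.clip(p, 0.05, 0.95)                          # avoid infinite z-scores at 0 or 1

    z = norm.ppf(p)                                     # preference probability -> z-score
    np.fill_diagonal(z, 0.0)
    scale = z.mean(axis=1)                              # quality scale value per condition
    print(scale)                                        # larger value = preferred more often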

Item: Computer Assisted Relief Generation—A Survey (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Kerber, J.; Wang, M.; Chang, J.; Zhang, J. J.; Belyaev, A.; Seidel, H.‐P.; Holly Rushmeier and Oliver Deussen
In this paper, we present an overview of the achievements accomplished to date in the field of computer‐aided relief generation. We delineate the problem, classify different solutions, analyse similarities, investigate developments and review the approaches according to their particular relative strengths and weaknesses. Moreover, we describe remaining challenges and point out prospective extensions. This survey is therefore addressed to both researchers and artists, providing insight into the theory behind the different concepts in the field as well as guidance on the practical applicability of the methods presented.

Item: Content‐Aware Automatic Photo Enhancement (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Kaufman, Liad; Lischinski, Dani; Werman, Michael; Holly Rushmeier and Oliver Deussen
Automatic photo enhancement is one of the long‐standing goals in image processing and computational photography. While a variety of methods have been proposed for manipulating tone and colour, most automatic methods used in practice operate on the entire image without attempting to take the content of the image into account. In this paper, we present a new framework for automatic photo enhancement that attempts to take local and global image semantics into account. Specifically, our content‐aware scheme attempts to detect and enhance the appearance of human faces, blue skies with or without clouds and underexposed salient regions. A user study was conducted that demonstrates the effectiveness of the proposed approach compared to existing auto‐enhancement tools.

Item: Data‐Parallel Decompression of Triangle Mesh Topology (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Meyer, Quirin; Keinert, Benjamin; Sußner, Gerd; Stamminger, Marc; Holly Rushmeier and Oliver Deussen
We propose a lossless, single‐rate triangle mesh topology codec tailored for fast data‐parallel GPU decompression. Our compression scheme coherently orders generalized triangle strips in memory. To unpack generalized triangle strips efficiently, we propose a novel parallel and scalable algorithm. We order vertices coherently to further improve our compression scheme. We use a variable bit‐length code for additional compression benefits, for which we propose a scalable data‐parallel decompression algorithm. For a set of standard benchmark models, we obtain (min: 3.7, med: 4.6, max: 7.6) bits per triangle. Our CUDA decompression requires only about 15% of the time it takes to render the model even with a simple shader.
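
For readers unfamiliar with the primitive being compressed, the sketch below expands a plain sequential triangle strip into individual triangles (a minimal CPU illustration; the paper's generalized strips additionally encode restarts and are unpacked in parallel on the GPU):

    def decode_triangle_strip(indices):
        """Expand a sequential triangle strip into triangles.

        Each new index forms a triangle with the two preceding ones; the
        winding order is flipped on every other triangle so that all
        triangles keep a consistent orientation."""
        triangles = []
        for i in range(2, len(indices)):
            a, b, c = indices[i - 2], indices[i - 1], indices[i]
            if a == b or b == c or a == c:   # degenerate triangle, often used as a strip break
                continue
            triangles.append((a, b, c) if i % 2 == 0 else (b, a, c))
        return triangles

    # The strip [0, 1, 2, 3, 4] encodes the triangles (0, 1, 2), (2, 1, 3), (2, 3, 4).
    print(decode_triangle_strip([0, 1, 2, 3, 4]))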

Item: Dependency‐Free Parallel Progressive Meshes (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Derzapf, E.; Guthe, M.; Holly Rushmeier and Oliver Deussen
The constantly increasing complexity of polygonal models in interactive applications poses two major problems. First, the number of primitives that can be rendered at real‐time frame rates is currently limited to a few million. Secondly, less than 45 million triangles (with vertices and normals) can be stored per gigabyte. Although the rendering time can be reduced using level‐of‐detail (LOD) algorithms, representing a model at different complexity levels, these often even increase memory consumption. Out‐of‐core algorithms solve this problem by transferring the data currently required for rendering from external devices. Compression techniques are commonly used because of the limited bandwidth. The main problem of compression and decompression algorithms is that they provide only coarse‐grained random access. A similar problem occurs in view‐dependent LOD techniques: because of the interdependency of split operations, the adaptation rate is reduced, leading to visible popping artefacts during fast movements. In this paper, we propose a novel algorithm for real‐time view‐dependent rendering of gigabyte‐sized models. It is based on a neighbourhood dependency‐free progressive mesh data structure. Using a per‐operation compression method, it is suitable for parallel random‐access decompression and out‐of‐core memory management without storing decompressed data.
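
The figure of less than 45 million triangles per gigabyte is consistent with a simple back-of-the-envelope estimate for an uncompressed indexed mesh (our own estimate under typical assumptions: 32-bit indices, 32-bit floats, and roughly twice as many triangles as vertices):

\[
\underbrace{3 \times 4\,\mathrm{B}}_{\text{indices per triangle}}
\;+\; \underbrace{\tfrac{1}{2}\,(3+3)\times 4\,\mathrm{B}}_{\text{shared vertex: position + normal}}
\;=\; 24\,\mathrm{B\ per\ triangle},
\qquad
\frac{2^{30}\,\mathrm{B}}{24\,\mathrm{B}} \;\approx\; 44.7 \text{ million triangles}.
\]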

Item: Editorial (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Holly Rushmeier and Oliver Deussen

Item: Extraction of Dominant Extremal Structures in Volumetric Data Using Separatrix Persistence (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Günther, D.; Seidel, H.‐P.; Weinkauf, T.; Holly Rushmeier and Oliver Deussen
Extremal lines and surfaces are features of a 3D scalar field where the scalar function becomes minimal or maximal with respect to a local neighborhood.

Item: Geodesic Polar Coordinates on Polygonal Meshes (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Melvær, Eivind Lyche; Reimers, Martin; Holly Rushmeier and Oliver Deussen
Geodesic Polar Coordinates (GPCs) on a smooth surface

Item: Interactive Character Animation Using Simulated Physics: A State‐of‐the‐Art Review (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Geijtenbeek, T.; Pronost, N.; Holly Rushmeier and Oliver Deussen
Physics simulation offers the possibility of truly responsive and realistic animation. Despite wide adoption of physics simulation for the animation of passive phenomena, such as fluids, cloths and rag‐doll characters, commercial applications still resort to kinematics‐based approaches for the animation of actively controlled characters. However, following a renewed interest in the use of physics simulation for interactive character animation, many recent publications demonstrate tremendous improvements in robustness, visual quality and usability. We present a structured review of over two decades of research on physics‐based character animation, as well as point out various open research areas and possible future directions.

Item: Linear Surface Reconstruction from Discrete Fundamental Forms on Triangle Meshes (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Wang, Y.; Liu, B.; Tong, Y.; Holly Rushmeier and Oliver Deussen
We present a linear algorithm to reconstruct the vertex coordinates for a surface mesh given its edge lengths and dihedral angles, unique up to rotation and translation. A local integrability condition for the existence of an immersion of the mesh in 3D Euclidean space is provided, mirroring the fundamental theorem of surfaces in the continuous setting (i.e. Gauss's equation and the Mainardi–Codazzi equations) if we regard edge lengths as the discrete first fundamental form and dihedral angles as the discrete second fundamental form. The resulting sparse linear system to solve for the immersion is derived from the convex optimization of a quadratic energy based on a lift from the immersion in the 3D Euclidean space to the 6D rigid motion space. This discrete representation and linear reconstruction can benefit a wide range of geometry processing tasks such as surface deformation and shape analysis. A rotation‐invariant surface deformation through point and orientation constraints is demonstrated as well.
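
For context, the smooth-setting result being mirrored here is the fundamental theorem of surfaces: a surface is determined up to rigid motion by its first and second fundamental forms, provided they satisfy the Gauss and Mainardi–Codazzi compatibility equations (standard differential geometry, not notation from the paper),

\[
\mathrm{I} = E\,du^{2} + 2F\,du\,dv + G\,dv^{2},
\qquad
\mathrm{II} = L\,du^{2} + 2M\,du\,dv + N\,dv^{2}.
\]

In the discrete analogue described above, per-edge lengths play the role of I, per-edge dihedral angles play the role of II, and the local integrability condition replaces the compatibility equations.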

Item: Low‐Complexity Intervisibility in Height Fields (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Timonen, Ville; Holly Rushmeier and Oliver Deussen
Global illumination systems require intervisibility information between pairs of points in a scene. This visibility problem is computationally complex, and current interactive implementations for dynamic scenes are limited to crude approximations or small amounts of geometry. We present a novel algorithm to determine intervisibility from all points of dynamic height fields as visibility horizons in discrete azimuthal directions. The algorithm determines accurate visibility along each azimuthal direction in time linear in the number of output visibility horizons. This is achieved by using a novel visibility structure we call the convex hull tree. The key feature of our algorithm is its ability to incrementally update the convex hull tree such that at each receiver point only the visible parts of the height field are traversed. This results in low time complexity; compared to previous work, we achieve two orders of magnitude reduction in the number of algorithm iterations and a speedup of 2.4 to 41 on
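
To make the notion of a visibility horizon concrete, the sketch below computes, for every sample of one azimuthal scanline of a height field, the maximum elevation angle towards any earlier sample (a brute-force quadratic-time baseline for illustration only; the paper's convex hull tree produces the same horizons in linear time):

    import math

    def horizon_angles(heights, spacing=1.0):
        """For each sample of a 1D height-field scanline, return the maximum
        elevation angle towards any preceding sample, i.e. the visibility
        horizon looking back along the scanline. O(n^2) reference version."""
        horizons = []
        for i, h_i in enumerate(heights):
            best = -math.pi / 2                  # nothing occludes: horizon at -90 degrees
            for j in range(i):
                dist = (i - j) * spacing
                best = max(best, math.atan2(heights[j] - h_i, dist))
            horizons.append(best)
        return horizons

    print(horizon_angles([0.0, 2.0, 1.0, 4.0, 0.5]))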

Item: Reviewers (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Rushmeier, Holly; Deussen, Oliver; Holly Rushmeier and Oliver Deussen

Item: Smart Scribbles for Sketch Segmentation (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Noris, G.; Sýkora, D.; Shamir, A.; Coros, S.; Whited, B.; Simmons, M.; Hornung, A.; Gross, M.; Sumner, R.; Holly Rushmeier and Oliver Deussen
We present ‘Smart Scribbles’, a new scribble‐based interface for user‐guided segmentation of digital sketchy drawings. In contrast to previous approaches based on simple selection strategies, Smart Scribbles exploits richer geometric and temporal information, resulting in a more intuitive segmentation interface. We introduce a novel energy minimization formulation in which both geometric and temporal information from digital input devices is used to define stroke‐to‐stroke and scribble‐to‐stroke relationships. Although the minimization of this energy is, in general, an NP‐hard problem, we use a simple heuristic that leads to a good approximation and permits an interactive system able to produce accurate labellings even for cluttered sketchy drawings. We demonstrate the power of our technique in several practical scenarios such as sketch editing, as‐rigid‐as‐possible deformation and registration, and on‐the‐fly labelling based on pre‐classified guidelines.

Item: State of the Art Report on Video‐Based Graphics and Video Visualization (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Borgo, R.; Chen, M.; Daubney, B.; Grundy, E.; Heidemann, G.; Höferlin, B.; Höferlin, M.; Leitte, H.; Weiskopf, D.; Xie, X.; Holly Rushmeier and Oliver Deussen
In recent years, a collection of new techniques that deal with video as input data has emerged in computer graphics and visualization. In this survey, we report the state of the art in video‐based graphics and video visualization. We provide a review of techniques for making photo‐realistic or artistic computer‐generated imagery from videos, as well as methods for creating summary and/or abstract visual representations to reveal important features and events in videos. We provide a new taxonomy to categorize the concepts and techniques in this newly emerged body of knowledge. To support this review, we also give a concise overview of the major advances in automated video analysis, as some techniques in this field (e.g. feature extraction, detection, tracking and so on) have been featured in video‐based modelling and rendering pipelines for graphics and visualization.

Item: Temporal Blending for Adaptive SPH (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Orthmann, Jens; Kolb, Andreas; Holly Rushmeier and Oliver Deussen
In this paper, we introduce a fast and consistent smoothed particle hydrodynamics (SPH) technique which is suitable for convection–diffusion simulations of incompressible fluids. We apply our temporal blending technique to reduce the number of particles in the simulation while smoothly changing quantity fields. Our approach greatly reduces the error introduced in the pressure term when changing particle configurations. Compared to other methods, this enables larger integration time‐steps in the transition phase. Our implementation is fully GPU‐based to take advantage of the parallel nature of particle simulations.

Item: Temporal Coherence Methods in Real‐Time Rendering (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Scherzer, Daniel; Yang, Lei; Mattausch, Oliver; Nehab, Diego; Sander, Pedro V.; Wimmer, Michael; Eisemann, Elmar; Holly Rushmeier and Oliver Deussen
Nowadays, there is a strong trend towards rendering to higher‐resolution displays and at high frame rates. This development aims at delivering more detail and better accuracy, but it also comes at a significant cost. Although graphics cards continue to evolve with an ever‐increasing amount of computational power, the speed gain is easily counteracted by increasingly complex and sophisticated shading computations. For real‐time applications, the direct consequence is that image resolution and temporal resolution are often the first candidates to bow to the performance constraints (e.g. although full HD is possible, PS3 and XBox often render at lower resolutions). In order to achieve high‐quality rendering at a lower cost, one can exploit temporal coherence (TC). The underlying observation is that a higher resolution and frame rate do not necessarily imply a much higher workload, but a larger amount of redundancy and a higher potential for amortizing rendering over several frames. In this survey, we investigate methods that make use of this principle and provide practical and theoretical advice on how to exploit TC for performance optimization. These methods not only allow incorporating more computationally intensive shading effects into many existing applications, but also offer exciting opportunities for extending high‐end graphics applications to lower‐spec consumer‐level hardware. To this end, we first introduce the notion and main concepts of TC, including an overview of historical methods. We then describe a general approach, image‐space reprojection, with several implementation algorithms that facilitate reusing shading information across adjacent frames. We also discuss data‐reuse quality and performance related to reprojection techniques. Finally, in the second half of this survey, we demonstrate various applications that exploit TC in real‐time rendering.
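
The reprojection idea at the heart of many of these methods fits in a few lines: reconstruct a pixel's world-space position, project it with the previous frame's view-projection matrix and, if the cached depth there agrees, reuse the previously shaded colour (a simplified NumPy sketch of the general principle; real implementations run per pixel in a shader and handle disocclusions, filtering and coordinate-flip conventions):

    import numpy as np

    def reproject(world_pos, prev_view_proj, prev_depth, prev_color, eps=1e-3):
        """Look up last frame's shading for a surface point seen this frame.

        world_pos:      (3,) world-space position of the current pixel
        prev_view_proj: (4, 4) previous frame's view-projection matrix
        prev_depth:     (H, W) previous depth buffer, NDC depth mapped to [0, 1]
        prev_color:     (H, W, 3) previous shaded colour buffer
        Returns the cached colour, or None if the cached value cannot be reused."""
        clip = prev_view_proj @ np.append(world_pos, 1.0)
        if clip[3] <= 0.0:
            return None                           # behind the previous camera
        ndc = clip[:3] / clip[3]                  # normalized device coordinates in [-1, 1]
        if np.any(np.abs(ndc[:2]) > 1.0):
            return None                           # outside the previous frame
        h, w = prev_depth.shape
        x = int((ndc[0] * 0.5 + 0.5) * (w - 1))
        y = int((ndc[1] * 0.5 + 0.5) * (h - 1))
        if abs(prev_depth[y, x] - (ndc[2] * 0.5 + 0.5)) > eps:
            return None                           # depth mismatch: disocclusion, do not reuse
        return prev_color[y, x]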

Item: Visualization for the Physical Sciences (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Lipşa, Dan R.; Laramee, Robert S.; Cox, Simon J.; Roberts, Jonathan C.; Walker, Rick; Borkin, Michelle A.; Pfister, Hanspeter; Holly Rushmeier and Oliver Deussen
Close collaboration with other scientific fields is an important goal for the visualization community. Yet engaging in a scientific collaboration can be challenging. The physical sciences, namely astronomy, chemistry, earth sciences and physics, exhibit an extensive range of research directions, providing exciting challenges for visualization scientists and creating ample possibilities for collaboration. We present the first survey of its kind that provides a comprehensive view of existing work on visualization for the physical sciences. We introduce novel classification schemes based on application area, data dimensionality and main challenge addressed, and apply these classifications to each contribution from the literature. Our survey helps in understanding the status of current research and serves as a useful starting point for those interested in visualization for the physical sciences.