28-Issue 1
Browsing 28-Issue 1 by Title
Item: BTF-CIELab: A Perceptual Difference Measure for Quality Assessment and Compression of BTFs (The Eurographics Association and Blackwell Publishing Ltd, 2009)
Authors: Guthe, Michael; Mueller, Gero; Schneider, Martin; Klein, Reinhard
Driven by the advances in lossy compression of bidirectional texture functions (BTFs), there is a growing need for reliable methods to numerically measure the visual quality of the various compressed representations. Based on the CIE ΔE00 colour difference equation and concepts of its spatio-temporal extension ST-CIELab for video quality assessment, this paper presents a numerical quality measure for compressed BTF representations. By analysing the BTF in its full six-dimensional (6D) space, light and view transition effects are integrated into the measure. In addition to the compressed representation, the method only requires the source BTF images as input and thus aids the objective evaluation of different compression techniques by means of a simple numerical comparison. By separating the spatial and angular components of the difference measure and linearizing each of them, the measure can be incorporated into any linear or multi-linear compression technique. Using a per-colour-channel principal component analysis (PCA), compression rates of about 500:1 can be achieved at excellent visual quality.

Item: CGForum 2009 Cover Image (The Eurographics Association and Blackwell Publishing Ltd, 2009)
Authors: Spencer, Ben; Jones, Mark W.

Item: A Comparison of Tabular PDF Inversion Methods (The Eurographics Association and Blackwell Publishing Ltd, 2009)
Authors: Cline, D.; Razdan, A.; Wonka, P.
The most common form of tabular inversion used in computer graphics is to compute the cumulative distribution table of a probability density function (PDF) and then search within it to transform points, using an O(log n) binary search. Besides the standard inversion method, however, several other discrete inversion algorithms exist that can perform the same transformation in O(1) time per point. In this paper, we examine the performance of three of these alternate methods, two of which are new.
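
As an illustration of the standard tabular inversion that the abstract above contrasts with its O(1) alternatives, the following minimal sketch builds the cumulative distribution table once and then maps uniform samples to bins with a binary search. Function names and the example table are illustrative assumptions, not code from the paper.

```python
import numpy as np

def build_cdf(pdf_table):
    """Normalize a tabulated PDF and return its cumulative distribution table."""
    pdf = np.asarray(pdf_table, dtype=float)
    cdf = np.cumsum(pdf)
    return cdf / cdf[-1]

def invert(cdf, u):
    """Transform uniform samples u in [0, 1) to bin indices via an O(log n) binary search."""
    return np.searchsorted(cdf, u, side="right")

# Usage: draw 5 bin indices in proportion to the tabulated density.
cdf = build_cdf([0.1, 0.4, 0.3, 0.2])
print(invert(cdf, np.random.rand(5)))
```
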
Item: Compression of Human Motion Capture Data Using Motion Pattern Indexing (The Eurographics Association and Blackwell Publishing Ltd, 2009)
Authors: Gu, Qin; Peng, Jingliang; Deng, Zhigang
In this work, a novel scheme is proposed to compress human motion capture data based on hierarchical structure construction and motion pattern indexing. For a given sequence of 3D motion capture data of the human body, the 3D markers are first organized into a hierarchy where each node corresponds to a meaningful part of the human body. Then, the motion sequence corresponding to each body part is coded separately. Based on the observation that there is a high degree of spatial and temporal correlation among the 3D marker positions, we strive to identify motion patterns that form a database for each meaningful body part. Thereafter, a sequence of motion capture data can be efficiently represented as a series of motion pattern indices. As a result, a higher compression ratio is achieved compared with the prior art, especially for long sequences of motion capture data with repetitive motion styles. Another distinction of this work is that it provides means for flexible and intuitive global and local distortion control.

Item: Editorial (The Eurographics Association and Blackwell Publishing Ltd, 2009)
Authors: Scopigno, Roberto; Groeller, Eduard

Item: Efficient Geometry Compression for GPU-based Decoding in Realtime Terrain Rendering (The Eurographics Association and Blackwell Publishing Ltd, 2009)
Authors: Dick, Christian; Schneider, Jens; Westermann, Ruediger
We present a geometry compression scheme for restricted quadtree meshes and use this scheme for the compression of adaptively triangulated digital elevation models (DEMs). A compression factor of 8-9 is achieved by employing a generalized strip representation of quadtree meshes to incrementally encode vertex positions. In combination with adaptive error-controlled triangulation, this allows us to significantly reduce bandwidth requirements in the rendering of large DEMs that have to be paged from disk. The compression scheme is specifically tailored for GPU-based decoding, since it minimizes dependent memory access operations. We can thus trade CPU operations and CPU-GPU data transfer for GPU processing, resulting in streaming of DEMs from main memory into GPU memory that is twice as fast. A novel storage format for decoded DEMs on the GPU facilitates a sustained rendering throughput of about 300 million triangles per second. Due to these properties, the proposed scheme enables scalable rendering with respect to the display resolution, independent of the data size. For a maximum screen-space error below 1 pixel, it achieves frame rates of over 100 fps, even on high-resolution displays. We validate the efficiency of the proposed method by presenting experimental results on scanned elevation models of several hundred gigabytes.

Item: Exposure Fusion: A Simple and Practical Alternative to High Dynamic Range Photography (The Eurographics Association and Blackwell Publishing Ltd, 2009)
Authors: Mertens, T.; Kautz, J.; Van Reeth, F.
We propose a technique for fusing a bracketed exposure sequence into a high-quality image, without first converting to high dynamic range (HDR). Skipping the physically based HDR assembly step simplifies the acquisition pipeline, avoids camera response curve calibration and is computationally efficient. It also allows flash images to be included in the sequence. Our technique blends multiple exposures, guided by simple quality measures like saturation and contrast. This is done in a multiresolution fashion to account for the brightness variation in the sequence. The resulting image quality is comparable to existing tone mapping operators.
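
The exposure fusion abstract above blends exposures guided by simple quality measures such as saturation and contrast. The sketch below is a deliberately simplified per-pixel version of that idea using only those two measures and a naive weighted average; the published method additionally blends in a multiresolution fashion, which is omitted here, and the function names are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import laplace

def weight_map(img):
    """Per-pixel quality weights from two of the measures named in the abstract:
    contrast (absolute Laplacian of the greyscale image) times saturation
    (standard deviation across the RGB channels)."""
    grey = img.mean(axis=2)
    contrast = np.abs(laplace(grey))
    saturation = img.std(axis=2)
    return contrast * saturation + 1e-12      # epsilon keeps every weight strictly positive

def fuse(exposures):
    """Naive per-pixel weighted average of an exposure stack (H x W x 3 arrays in [0, 1]).
    The published technique performs this blend with a multiresolution pyramid instead."""
    weights = np.stack([weight_map(e) for e in exposures])
    weights /= weights.sum(axis=0, keepdims=True)           # normalise weights per pixel
    return sum(w[..., None] * e for w, e in zip(weights, exposures))
```
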
Item: Fast Ray Tracing of Arbitrary Implicit Surfaces with Interval and Affine Arithmetic (The Eurographics Association and Blackwell Publishing Ltd, 2009)
Authors: Knoll, A.; Hijazi, Y.; Kensler, A.; Schott, M.; Hansen, C.; Hagen, H.
Existing techniques for rendering arbitrary-form implicit surfaces are limited in either performance, correctness or flexibility. Ray tracing algorithms employing interval arithmetic (IA) or affine arithmetic (AA) for root-finding are robust and general in the class of surfaces they support, but traditionally slow. Nonetheless, implemented efficiently using a stack-driven iterative algorithm and SIMD vector instructions, these methods can achieve interactive performance for common algebraic surfaces on the CPU. A similar algorithm can also be implemented stacklessly, allowing for efficient ray tracing on the GPU. This paper presents these algorithms, as well as an inclusion-preserving reduced affine arithmetic (RAA) for faster ray-surface intersection. Shader metaprogramming allows for immediate and automatic generation of symbolic expressions and their interval or affine extensions. Moreover, we are able to render even complex forms robustly, in real time at high resolution.
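
To illustrate the interval-arithmetic root finding that the abstract above builds on, here is a minimal sketch under strong assumptions: a hand-written interval extension of the unit-sphere function along a ray, and a recursive bisection that rejects ray segments whose bounds exclude zero. It is not the paper's SIMD, stackless or reduced affine arithmetic implementation.

```python
import numpy as np

def f_interval(t_lo, t_hi, o, d):
    """Interval extension of f(x, y, z) = x^2 + y^2 + z^2 - 1 along the ray o + t*d.
    Returns conservative bounds of f over the ray segment [t_lo, t_hi]."""
    lo, hi = o + t_lo * d, o + t_hi * d
    p_lo, p_hi = np.minimum(lo, hi), np.maximum(lo, hi)
    # Interval square per coordinate: lower bound is 0 if the interval spans 0.
    sq_lo = np.where((p_lo <= 0) & (p_hi >= 0), 0.0, np.minimum(p_lo**2, p_hi**2))
    sq_hi = np.maximum(p_lo**2, p_hi**2)
    return sq_lo.sum() - 1.0, sq_hi.sum() - 1.0

def intersect(o, d, t_lo=0.0, t_hi=10.0, eps=1e-5):
    """Recursive IA bisection: discard segments whose bounds exclude 0, subdivide otherwise."""
    f_lo, f_hi = f_interval(t_lo, t_hi, o, d)
    if f_lo > 0.0 or f_hi < 0.0:           # inclusion property: no root can lie in this segment
        return None
    if t_hi - t_lo < eps:                  # segment small enough: report the near end
        return t_lo
    mid = 0.5 * (t_lo + t_hi)
    hit = intersect(o, d, t_lo, mid, eps)  # visiting the near half first keeps front-to-back order
    return hit if hit is not None else intersect(o, d, mid, t_hi, eps)

# Usage: a ray from z = -3 towards the unit sphere; the expected hit is near t = 2.
print(intersect(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))
```
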
Item: Out-of-Core and Dynamic Programming for Data Distribution on a Volume Visualization Cluster (The Eurographics Association and Blackwell Publishing Ltd, 2009)
Authors: Frank, S.; Kaufman, A.
Ray-directed volume-rendering algorithms are well suited for parallel implementation in a distributed cluster environment. For distributed ray casting, the scene must be partitioned between nodes for good load balancing, and a strict view-dependent priority order is required for image composition. In this paper, we define the load-balanced network distribution (LBND) problem and map it to the NP-complete precedence-constrained job-shop scheduling problem. We introduce a kd-tree solution and a dynamic programming solution. To process a massive data set, either a parallel or an out-of-core approach is required. Parallel preprocessing is performed by render nodes on data that are allocated using a static data structure. Volumetric data sets often contain a large portion of voxels that will never be rendered, or empty space; parallel preprocessing fails to take advantage of this. Our slab-projection slice, introduced in this paper, tracks empty space across consecutive slices of data to reduce the amount of data distributed and rendered. It is used to facilitate out-of-core bricking and kd-tree partitioning. Load balancing using each of our approaches is compared with traditional methods using several segmented regions of the Visible Korean data set.

Item: Partial 3D Shape Retrieval by Reeb Pattern Unfolding (The Eurographics Association and Blackwell Publishing Ltd, 2009)
Authors: Tierny, Julien; Vandeborre, Jean-Philippe; Daoudi, Mohamed

Item: A Psychophysical Evaluation of Inverse Tone Mapping Techniques (The Eurographics Association and Blackwell Publishing Ltd, 2009)
Authors: Banterle, Francesco; Ledda, Patrick; Debattista, Kurt; Bloj, Marina; Artusi, Alessandro; Chalmers, Alan
In recent years, inverse tone mapping techniques have been proposed for enhancing low-dynamic-range (LDR) content for a high-dynamic-range (HDR) experience on HDR displays, and for image-based lighting. In this paper, we present a psychophysical study to evaluate the performance of inverse (reverse) tone mapping algorithms. Some of these techniques are computationally expensive because they need to resolve quantization problems that can occur when expanding an LDR image. Even if they can be implemented efficiently on hardware, the computational cost can still be high. An alternative is to utilize less complex operators, although these may suffer in terms of accuracy. Our study investigates, firstly, whether a high level of complexity is needed for inverse tone mapping and, secondly, whether a correlation exists between image content and quality. Two main applications have been considered: visualization on an HDR monitor and image-based lighting.

Item: Resolution Independent NPR-Style 3D Line Textures (The Eurographics Association and Blackwell Publishing Ltd, 2009)
Authors: Potter, Kristin; Gooch, Amy; Gooch, Bruce; Willemsen, Peter; Kniss, Joe; Riesenfeld, Richard; Shirley, Peter
This work introduces a technique for interactive walk-throughs of non-photorealistically rendered (NPR) scenes, using three-dimensional (3D) line primitives to define architectural features of the model as well as to indicate textural qualities. Line primitives are not typically used in this manner; texture mapping techniques are favoured instead because they can encapsulate a great deal of information in a single texture map and take advantage of GPU optimizations for accelerated rendering. However, texture-mapped images may not maintain the visual quality or aesthetic appeal that is possible when using 3D lines to simulate NPR scenes such as hand-drawn illustrations or architectural renderings. In addition, line textures can be modified interactively, for instance changing the sketchy quality of the lines. The technique introduced here extracts feature edges from a model and, using these edges, generates a reduced set of line textures which indicate material properties while maintaining interactive frame rates. A clipping algorithm is presented to enable 3D lines to reside only in the interior of the 3D model without exposing the underlying triangulated mesh. The resulting system produces interactive illustrations with high visual quality that are free from animation artifacts.

Item: Shape Context Preserving Deformation of 2D Anatomical Illustrations (The Eurographics Association and Blackwell Publishing Ltd, 2009)
Authors: Chen, Wei; Liang, Xiao; Maciejewski, Ross; Ebert, David S.
In this paper, we present a novel two-dimensional (2D) shape context preserving image manipulation approach which constructs and manipulates a 2D mesh with a new differential mesh editing algorithm. We introduce a novel shape context descriptor and integrate it into the deformation framework, facilitating shape-preserving deformation for 2D anatomical illustrations. Our new scheme utilizes an analogy-based shape transfer technique in order to learn shape styles from reference images. Experimental results show that visually plausible deformations can be quickly generated from an existing example at interactive frame rates. An experienced artist has evaluated our approach and his feedback is quite encouraging.

Item: Tangential Distance Fields for Mesh Silhouette Problems (The Eurographics Association and Blackwell Publishing Ltd, 2009)
Authors: Olson, M.; Zhang, H.
We consider a tangent-space representation of surfaces that maps each point on a surface to the tangent plane of the surface at that point. Such representations are known to facilitate the solution of several visibility problems, in particular those involving silhouette analysis. In this paper, we introduce a novel class of distance fields for a given surface defined by its tangent planes. At each point in space, we assign a scalar value which is a weighted sum of distances to these tangent planes. We call the resulting scalar field a tangential distance field (TDF). When applied to triangle mesh models, the tangent planes become the supporting planes of the mesh triangles. The weighting scheme used to construct a TDF for a given mesh and the way the TDF is utilized can be closely tailored to a specific application. At the same time, TDFs are continuous, lending themselves to standard optimization techniques such as greedy local search, thus leading to efficient algorithms. In this paper, we use four applications to illustrate the benefit of using TDFs: multi-origin silhouette extraction in Hough space, silhouette-based viewpoint selection, camera path planning and light source placement.
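
As a small illustration of the tangential distance field described in the abstract above, the sketch below evaluates, at a query point, a weighted sum of distances to the supporting planes of a mesh's triangles. Uniform weights and absolute distances are assumptions made here for brevity; the paper tailors the weighting scheme to each application, and none of these function names come from the paper.

```python
import numpy as np

def supporting_planes(vertices, triangles):
    """Unit normals n_i and offsets d_i of each triangle's supporting plane (n·x + d = 0)."""
    a, b, c = (vertices[triangles[:, k]] for k in range(3))
    n = np.cross(b - a, c - a)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    d = -np.einsum("ij,ij->i", n, a)
    return n, d

def tangential_distance_field(p, normals, offsets, weights=None):
    """TDF value at point p: weighted sum of absolute distances to all supporting planes.
    Uniform weights are an illustrative assumption, not the paper's weighting scheme."""
    dist = np.abs(normals @ p + offsets)
    if weights is None:
        weights = np.ones(len(dist))
    return np.dot(weights, dist)

# Usage: a single triangle lying in the z = 0 plane, queried at height z = 2.
V = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
T = np.array([[0, 1, 2]])
n, d = supporting_planes(V, T)
print(tangential_distance_field(np.array([0.25, 0.25, 2.0]), n, d))  # -> 2.0
```
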
Item: Visual-Quality Optimizing Super Resolution (The Eurographics Association and Blackwell Publishing Ltd, 2009)
Authors: Liu, F.; Wang, J.; Zhu, S.; Gleicher, M.; Gong, Y.
In this paper, we propose a robust image super-resolution (SR) algorithm that aims to maximize the overall visual quality of SR results. We consider a good SR algorithm to be fidelity preserving, image detail enhancing and smooth. Accordingly, we define perception-based measures for these visual qualities. Based on these quality measures, we formulate image SR as an optimization problem aiming to maximize the overall quality. Since the quality measures are quadratic, the optimization can be solved efficiently. Experiments on a large image set and a subjective user study demonstrate the effectiveness of the perception-based quality measures and the robustness and efficiency of the presented method.
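
The abstract above notes that, because the quality measures are quadratic, the optimization can be solved efficiently. The toy 1-D sketch below shows only that general pattern: a quadratic fidelity term plus a quadratic smoothness term combined into a single linear least-squares problem. The operators, weights and perception-based measures are illustrative assumptions and do not reproduce the paper's formulation.

```python
import numpy as np

def upsample_ls(low, factor=2, smooth_weight=0.1):
    """Toy 1-D super-resolution: minimize ||D h - low||^2 + w * ||G h||^2 over the
    high-res signal h, where D averages each block of `factor` samples and G takes
    second differences. Both terms are quadratic, so the problem is linear least squares."""
    n_low, n_high = len(low), len(low) * factor
    D = np.zeros((n_low, n_high))
    for i in range(n_low):                       # block-averaging downsampling operator
        D[i, i * factor:(i + 1) * factor] = 1.0 / factor
    G = np.zeros((n_high - 2, n_high))
    for i in range(n_high - 2):                  # second-difference smoothness operator
        G[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.vstack([D, np.sqrt(smooth_weight) * G])
    b = np.concatenate([low, np.zeros(n_high - 2)])
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Usage: upsample a short 1-D signal by a factor of 2.
print(upsample_ls(np.array([0.0, 1.0, 4.0, 9.0])))
```
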