EG UK Theory and Practice of Computer Graphics
Browsing EG UK Theory and Practice of Computer Graphics by Subject "and texture"
Item: Acquisition, Representation and Rendering of Real-World Models using Polynomial Texture Maps in 3D (The Eurographics Association, 2013)
Vassallo, Elaine; Spina, Sandro; Debattista, Kurt; Silvester Czanner and Wen Tang
The ability to represent real-world objects digitally in a realistic manner is an indispensable tool for many applications. This paper proposes a method for acquiring, processing, representing, and rendering these digital representations. Acquisition can be divided into two processes: acquiring the 3D geometry of the object, and obtaining the texture and reflectance behaviour of the object. Our work explores the possibility of using Microsoft's Kinect sensor to acquire the 3D geometry, by registration of data captured from different viewpoints. The Kinect sensor itself is used to acquire texture and reflectance information which is represented using multiple Polynomial Texture Maps. We present processing pipelines for both geometry and texture, and finally our work examines how the acquired and processed geometry, texture, and reflectance behaviour information can be mapped together in 3D, allowing the user to view the object from different viewpoints while being able to interactively change light direction. Varying light direction uncovers details of the object which would not have been possible to observe using a single, fixed light direction. This is useful in many scenarios, amongst which is the examination of cultural heritage artifacts with surface variations.

Item: A Compact Tucker-Based Factorization Model for Heterogeneous Subsurface Scattering (The Eurographics Association, 2013)
Kurt, Murat; Öztürk, Aydin; Peers, Pieter; Silvester Czanner and Wen Tang
This paper presents a novel compact factored subsurface scattering representation for optically thick, heterogeneous translucent materials. Our subsurface scattering representation is a combination of Tucker-based factorization and a linear regression method. We first apply Tucker factorization on the intensity profiles of the heterogeneous subsurface scattering responses. Next, we fit a polynomial model for characterizing the differences between the different color channels with a linear regression procedure. We show that our method achieves good compression while maintaining visual fidelity. We validate our heterogeneous subsurface scattering representation on various real-world heterogeneous translucent materials, geometries and lighting conditions.
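The first entry above represents per-texel reflectance with Polynomial Texture Maps. As a rough, non-authoritative sketch of the standard biquadratic PTM model such a pipeline builds on, the Python snippet below fits the six PTM coefficients per texel by least squares from captures under known light directions and relights the texture for an interactively chosen light direction; the function names, array shapes and pure-NumPy formulation are illustrative assumptions, not details taken from the paper.

import numpy as np

def fit_ptm(samples, light_dirs):
    """Fit biquadratic PTM coefficients per texel by least squares.

    samples:    (N, H, W) luminance values captured under N known light directions
    light_dirs: (N, 2) projected light directions (lu, lv), one per capture
    returns:    (H, W, 6) coefficients a0..a5 per texel
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Biquadratic PTM basis: lu^2, lv^2, lu*lv, lu, lv, 1
    A = np.stack([lu * lu, lv * lv, lu * lv, lu, lv, np.ones_like(lu)], axis=1)  # (N, 6)
    n, h, w = samples.shape
    b = samples.reshape(n, h * w)                    # one least-squares system per texel
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)   # (6, H*W)
    return coeffs.T.reshape(h, w, 6)

def relight(coeffs, lu, lv):
    """Evaluate the fitted PTM for a new light direction (lu, lv)."""
    basis = np.array([lu * lu, lv * lv, lu * lv, lu, lv, 1.0])
    return np.clip(coeffs @ basis, 0.0, None)        # (H, W) relit luminance

The same fit can be repeated per colour channel, or applied to luminance with chromaticity stored separately, which is one plausible reading of the "multiple Polynomial Texture Maps" the abstract mentions; this, too, is an assumption rather than a statement of the authors' exact pipeline.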
Item: Interactive Projective Texturing for Non-Photorealistic Shading of Technical 3D Models (The Eurographics Association, 2013)
Lux, Roland; Trapp, Matthias; Semmo, Amir; Döllner, Jürgen; Silvester Czanner and Wen Tang
This paper presents a novel interactive rendering technique for creating and editing shadings for man-made objects in technical 3D visualizations. In contrast to shading approaches that use intensities computed based on surface normals (e.g., Phong, Gooch, Toon shading), the presented approach uses one-dimensional gradient textures, which can be parametrized and interactively manipulated based on per-object bounding volume approximations. The fully hardware-accelerated rendering technique is based on projective texture mapping and customizable intensity transfer functions. A performance evaluation shows results comparable to traditional normal-based shading approaches. The work also introduces simple direct-manipulation metaphors that enable interactive user control of the gradient texture alignment and intensity transfer functions.

Item: Measuring Realism in Hair Rendering (The Eurographics Association, 2013)
Ramesh, Girish; Turner, Martin J.; Silvester Czanner and Wen Tang
Visualisation of hair is an extremely complex problem within the field of Computer Graphics. Over the last 10 years, huge strides have been made in the area of physically-based hair rendering, giving rise to many applications in various fields other than the graphics industry. Despite the number of models for hair rendering, there is no well-defined evaluation process to measure the realism of the hair models in use today. For this work-in-progress paper, we propose an evaluation process not only to evaluate the realism of hair rendering models, but also to examine the various effects that contribute to perceived realism. This builds an index of realism based on experiments with computer-generated models, and then proposes comparing the results with values obtained from computational tomography, optical imaging and goniophotometer readings.

Item: Natural Phenomena as Metaphors for Visualization of Trend Data in Interactive Software Maps (The Eurographics Association, 2015)
Würfel, Hannes; Trapp, Matthias; Limberger, Daniel; Döllner, Jürgen; Rita Borgo and Cagatay Turkay
Software maps are a commonly used tool for code quality monitoring in software-development projects and decision-making processes. While providing an important visualization technique for the hierarchical system structure of a single software revision, they lack capabilities with respect to the visualization of changes over multiple revisions. This paper presents a novel technique for visualizing the evolution of the software system structure based on software metric trends. These trend maps extend software maps by using real-time rendering techniques for natural phenomena, yielding additional visual variables that can be effectively used for the communication of changes. To this end, trend data is automatically computed by hierarchically aggregating software metrics. We demonstrate and discuss the presented technique using two real-world data sets of complex software systems.

Item: Resolution Estimation for Shadow Mapping (The Eurographics Association, 2013)
Ferko, Michal; Silvester Czanner and Wen Tang
We present an approach to efficiently reduce shadow map resolution while retaining high-quality hard shadows. In the first step, we generate a list of sample points that are seen from the camera and then project these into light space, much like Alias-free Shadow Maps. In the next step, we analyze the list of sample points on the GPU to construct a tight light frustum for shadow rendering. After the light frustum is computed, we calculate for each sample the actual coverage in the final shadow map to estimate how large a shadow map pixel should be. From this number, we derive the lowest possible resolution to use in the shadow map while retaining nearly alias-free shadows. Our algorithm is built for a deferred renderer.
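The last entry describes a concrete sequence: collect camera-visible sample points, project them into light space, fit a tight light frustum, and derive the lowest usable shadow-map resolution from per-sample coverage. The sketch below mirrors those steps on the CPU under simplifying assumptions; the function and parameter names are hypothetical, the per-sample footprint is assumed to be given in light-space units, and the paper's GPU analysis and coverage computation are not reproduced.

import numpy as np

def estimate_shadow_map_resolution(visible_points, light_view_proj, footprints):
    """Estimate the lowest shadow-map resolution for a set of visible samples.

    visible_points:  (N, 3) world-space points seen from the camera
    light_view_proj: (4, 4) matrix mapping world space to light clip space
                     (column-vector convention)
    footprints:      (N,) required texel size per sample, in light-space units
    returns:         (tight light-space bounds, suggested square resolution)
    """
    # Project the samples into light space (homogeneous divide), as in
    # alias-free shadow mapping.
    ones = np.ones((len(visible_points), 1))
    clip = np.hstack([visible_points, ones]) @ light_view_proj.T
    light_xy = clip[:, :2] / clip[:, 3:4]

    # A tight 2D bound over the visible samples defines the cropped light frustum.
    lo, hi = light_xy.min(axis=0), light_xy.max(axis=0)
    extent = float((hi - lo).max())

    # No texel may be larger than the smallest required footprint, so the
    # resolution is the frustum extent divided by that footprint.
    resolution = int(np.ceil(extent / footprints.min()))
    return (lo, hi), resolution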