Volume 17 (1998)
Browsing Volume 17 (1998) by Issue Date
Item: Eurographics '98 (Blackwell Publishers Ltd and the Eurographics Association, 1998)

Item: Event Reports (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Eurographics '98 Conference, page 301
Eurographics '98 Awards, page 303
EG UK '98 Conference, page 304
Cover Competition Winners, page 306

Item: Siggraph/Eurographics Workshop on Graphics Hardware (Blackwell Publishers Ltd and the Eurographics Association, 1998) Schneider, Bengt-Olaf

Item: Conservative Visibility and Strong Occlusion for Viewspace Partitioning of Densely Occluded Scenes (Blackwell Publishers Ltd and the Eurographics Association, 1998) Cohen-Or, Daniel; Fibich, Gadi; Halperin, Dan; Zadicario, Eyal
Computing the visibility of outdoor scenes is often much harder than that of indoor scenes. A typical urban scene, for example, is densely occluded, and since only a small fraction of the scene is visible from any given point, it is effective to precompute its visibility space. The difficulty is that although the majority of objects are hidden, some parts might be visible at a distance from an arbitrary location, and it is not clear how to detect them quickly. In this paper we present a method to partition the viewspace into cells, each containing a conservative superset of the visible objects. For a given cell the method tests the visibility of all the objects in the scene: for each object it searches for a strong occluder which guarantees that the object is not visible from any point within the cell. We show analytically that in a densely occluded scene the vast majority of objects are strongly occluded, and that the overhead of using conservative visibility (rather than exact visibility) is small. These results are further supported by our experimental results.
We also analyze the cost of the method and discuss its effectiveness.

Item: Eurographics '98 (Blackwell Publishers Ltd and the Eurographics Association, 1998)

Item: Dithered Color Quantization (Blackwell Publishers Ltd and the Eurographics Association, 1998) Buhmann, J. M.; Fellner, Dieter W.; Held, M.; Ketterer, J.; Puzicha, J.
Image quantization and digital halftoning are fundamental problems in computer graphics which arise when displaying high-color images on non-truecolor devices. The two steps are generally performed sequentially and, in most cases, independently of each other: color quantization, with its pixel-wise distortion measure, and the dithering process, with its local neighborhood, optimize different quality criteria or, frequently, follow a heuristic without reference to any quality measure. In this paper we propose a new method to simultaneously quantize and dither color images. The method is based on a rigorous cost-function approach which optimizes a quality criterion derived from a generic model of human perception. A highly efficient multiscale optimization algorithm is developed for the dithered color quantization cost function. The quality criterion and the optimization algorithms are evaluated on a representative set of artificial and real-world images, as well as on a collection of icons. A significant improvement in image quality is observed compared to standard color reduction approaches.

Item: Progressive Iso-Surface Extraction from Hierarchical 3D Meshes (Blackwell Publishers Ltd and the Eurographics Association, 1998) Grosso, Roberto; Ertl, Thomas
A multiresolution data decomposition offers a fundamental framework supporting compression, progressive transmission, and level-of-detail (LOD) control for large two- or three-dimensional data sets discretized on complex meshes. In this paper we extend a previously presented algorithm for 3D mesh reduction for volume data, based on multilevel finite element approximations, in two ways.
First, we present efficient data structures which allow approximations of the volume data to be constructed incrementally at lower or higher resolutions at interactive rates. An abstract description of the mesh hierarchy, in terms of a coarse base mesh and a set of integer records, offers a high compression potential, which is essential for efficient storage and progressive network transmission. Based on this mesh hierarchy, we then develop a new progressive iso-surface extraction algorithm. For a given iso-value, the corresponding iso-surface can be computed at different levels of resolution; changing to a finer or coarser resolution updates the surface only in those regions where the volume data is refined or coarsened. Our approach makes it possible to interactively visualize very large scalar fields, such as medical data sets, for which conventional algorithms would require at least an order of magnitude more resources.

Item: 1998 Annual Index (Blackwell Publishers Ltd and the Eurographics Association, 1998)

Item: A Light Hierarchy for Fast Rendering of Scenes with Many Lights (Blackwell Publishers Ltd and the Eurographics Association, 1998) Paquette, Eric; Poulin, Pierre; Drettakis, George
We introduce a new data structure, a light hierarchy, for efficiently ray-tracing scenes with many light sources. An octree is constructed over the point light sources in a scene, and each node represents all the light sources it contains by means of a single virtual light source. We derive bounds on the error committed by this approximation when shading a point, for both diffuse and specular reflections. These bounds are then used to guide a hierarchical shading algorithm: if the current level of the light hierarchy provides shading of sufficient quality, the approximation is used, thus avoiding the cost of shading with all the light sources contained below this level.
Otherwise, the descent into the light hierarchy continues. Our approach has been implemented for scenes without occlusion. The results show significant acceleration compared to standard ray tracing (up to 90 times faster) and a marked improvement over Ward's adaptive shadow testing.

Item: Programming Paradigms in an Object-Oriented Multimedia Standard (Blackwell Publishers Ltd and the Eurographics Association, 1998) Duke, D. J.; Herman, I.
Of the various programming paradigms in use today, object-orientation is probably the most successful in terms of industrial take-up and application, particularly in the field of multimedia. It is therefore unsurprising that this technology has been adopted by ISO/IEC JTC1/SC24 as the foundation for a forthcoming International Standard for Multimedia, called PREMO. Two important design aims of PREMO are that it be distributable, and that it provide a set of media-related services that can be extended in a disciplined way to support the needs of future applications and problem domains. While key aspects of the object-oriented paradigm provide a sound technical basis for achieving these aims, the need to balance extensibility and a high-level programming interface against the realities of efficiency and ease of implementation in a distributed setting meant that the task of synthesising a Standard from existing practice was non-trivial. Indeed, in order to meet the design aims of PREMO it was found necessary to augment the basic object infrastructure with facilities and ideas drawn from other programming paradigms, in particular concepts from constraint management and data flow.
This paper describes the important trade-offs that have affected the development of PREMO and explains how these are addressed through the use of specific programming paradigms.

Item: Multiresolution Isosurface Extraction with Adaptive Skeleton Climbing (Blackwell Publishers Ltd and the Eurographics Association, 1998) Poston, Tim; Wong, Tien-Tsin; Heng, Pheng-Ann
An isosurface extraction algorithm which can directly generate multiresolution isosurfaces from volume data is introduced. It generates low-resolution isosurfaces, with 4 to 25 times fewer triangles than those produced by the marching cubes algorithm, in comparable running times. By climbing from vertices (0-skeleton) to edges (1-skeleton) to faces (2-skeleton), the algorithm constructs boxes which adapt to the geometry of the true isosurface. Unlike previous adaptive marching cubes algorithms, it does not suffer from the gap-filling problem. Although the triangles in its meshes may not be optimally reduced, the algorithm is much faster than postprocessing triangle-reduction algorithms, so the coarse meshes it produces can serve as starting points for mesh optimization when mesh optimality is the main concern.

Item: Interactive Construction and Animation of Layered Elastically Deformable Characters (Blackwell Publishers Ltd and the Eurographics Association, 1998) Turner, Russell; Gobbetti, Enrico
An interactive system is described for creating and animating deformable 3D characters. By using a hybrid layered model of kinematic and physics-based components, together with an immersive 3D direct manipulation interface, it is possible to quickly construct characters that deform naturally when animated and whose behavior can be controlled interactively using intuitive parameters. In this layered construction technique, called the elastic surface layer model, a simulated elastically deformable skin surface is wrapped around a kinematic articulated figure.
Unlike previous layered models, the skin is free to slide along the underlying surface layers, held in place by geometric constraints which push the surface out and spring forces which pull the surface in toward the underlying layers. By tuning the parameters of the physics-based model, a variety of surface shapes and behaviors can be obtained, such as more realistic-looking skin deformation at the joints, skin sliding over muscles, and dynamic effects such as squash-and-stretch and follow-through. Since the elastic model derives all of its input forces from the underlying articulated figure, the animator may specify all of the physical properties of the character once, during the initial character design process, after which a complete animation sequence can be created using a traditional skeleton animation technique. Character construction and animation are done using a 3D user interface based on two-handed manipulation registered with head-tracked stereo viewing. In our configuration, a six-degree-of-freedom head tracker and CrystalEyes shutter glasses are used to display stereo images on a workstation monitor that dynamically follow the user's head motion. 3D virtual objects can be made to appear at a fixed location in physical space, which the user may view from different angles by moving his head. To construct 3D animated characters, the user interacts with the simulated environment using both hands simultaneously: the left hand, controlling a Spaceball, is used for 3D navigation and object movement, while the right hand, holding a 3D mouse, is used to manipulate, through a virtual tool metaphor, the objects appearing in front of the screen.
Hand-eye coordination is made possible by registering virtual space to physical space, allowing a variety of complex 3D tasks necessary for constructing 3D animated characters to be performed more easily and more rapidly than is possible using traditional interactive techniques.

Item: Editorial (Blackwell Publishers Ltd and the Eurographics Association, 1998) Coquillart, Sabine; Seidel, Hans-Peter

Item: The Priority Face Determination Tree for Hidden Surface Removal (Blackwell Publishers Ltd and the Eurographics Association, 1998) James, A.; Day, A. M.
Many virtual environments are built from a set of polygons that form the basis of objects in the scene. Using priority-list algorithms, the sequence in which these polygons are drawn depends upon the location of an observer; the polygons must be ordered correctly before a realistic image can be displayed. A scene must be drawn correctly in real time from all locations before the observer can move interactively around it with complete freedom. The binary space partitioning (BSP) tree developed by Fuchs, Kedem and Naylor in 1980 stores the view-independent priority of a set of polygons, which can be used to obtain the correct order for any given viewpoint. However, the number of polygons grows significantly due to the BSP splitting stage, increasing the number of nodes in the tree; this linearly affects the number of tests needed to traverse the tree to obtain the priority of the set of polygons. The algorithm presented here is built using its associated BSP tree, but attempts to reduce the number of tests to log_{4/3} n, at the cost of a tree of size O(N * 1.5^{log_{4/3} n - 1}), where n is the initial number of polygons in the scene and N the resulting number after BSP splitting. To achieve the increase in run-time efficiency, a height plane is used to restrict the viewpoint of the observer to a fixed height, but the key to the efficiency of the algorithm is the use of polygonal dependencies in the scene: if we know our location relative to the front or back of one polygon, then our position relative to one-quarter of the remaining polygons, in the expected worst case, can be determined.

Item: Calendar of Events (Blackwell Publishers Ltd and the Eurographics Association, 1998)

Item: Sixth Eurographics Workshop on Programming Paradigms in Graphics (WPPG97) (Blackwell Publishers Ltd and the Eurographics Association, 1998) Arbab, Farhad; Slusallek, Philipp

Item: Fast Feature-Based Metamorphosis and Operator Design (Blackwell Publishers Ltd and the Eurographics Association, 1998) Lee, Tong-Yee; Lin, Young-Ching; Sun, Y.N.; Lin, Leeween
Metamorphosis is a powerful visual technique for producing interesting transitions between two images or volume data sets. Image or volume metamorphosis using simple features provides flexible and easy control of visual effects. The feature-based image warping proposed by Beier and Neely is a brute-force approach. In this paper we first propose optimization methods which reduce its warping time without noticeable loss of image quality. Second, we extend our methods to 3D volume data and propose several interesting warping operators allowing global and local metamorphosis of volume data.

Item: Maximum Intensity Projection Using Splatting in Sheared Object Space (Blackwell Publishers Ltd and the Eurographics Association, 1998) Cai, Wenli; Sakas, Georgios
In this paper we present a new Maximum Intensity Projection (MIP) algorithm, implemented using splatting in a shear-warp context. The algorithm renders a MIP image by first splatting each voxel onto two intermediate spaces called the "worksheet" and the "shear image". The maximum value is then evaluated between the worksheet and the shear image. Finally, the shear image is warped onto the screen to generate the result image. Different footprints implementing different quality modes are discussed. In addition, we introduce a line-encoded indexing speed-up method to obtain interactive speed.
This algorithm allows for a quantitative, predictable trade-off between interactivity and image quality.

Item: Color Fidelity in Computer Graphics: a Survey (Blackwell Publishers Ltd and the Eurographics Association, 1998) Rougeron, Gilles; Peroche, Bernard
The purpose of this paper is to survey the state of the art in color fidelity for computer graphics. Color fidelity comprises three steps. The first is the spectral rendering phase, which attributes a spectrum to each pixel of a picture. During the second step, the spectral data is transformed into a set of tristimulus values in the XYZ color space. The purpose of the third step, called the Color Reproduction Function, is to determine the RGB values displayable on the screen in such a way that subjective fidelity is achieved. We detail the last two steps of the color fidelity process in particular; we also point out the work still remaining to be done in this field and suggest some directions for future research.

Item: Perception Based Color Image Difference (Blackwell Publishers Ltd and the Eurographics Association, 1998) Neumann, Laszlo; Matkovic, Kresimir; Purgathofer, Werner
A good image metric is often needed in digital image synthesis. It can be used to check the convergence behavior of progressive methods, to compare images rendered using various rendering methods, etc. Since images are rendered to be observed by humans, an image metric should correspond to human perception as well. We propose a new algorithm which operates in the original image space; there is no need for Fourier or wavelet transforms. Furthermore, the new metric is viewing-distance dependent. The method uses the contrast sensitivity function. The main idea is to place a number of rectangles of various sizes in the images and to compute the CIE LUV average color difference between corresponding rectangles. Errors are then weighted according to the rectangle size and the contrast sensitivity function.
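The rectangle-averaging idea in the last abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `rect_metric`, the use of the Mannos-Sakrison contrast sensitivity approximation, the mapping from rectangle size to spatial frequency, and the random placement of rectangles are all assumptions made for illustration. The images are assumed to already hold CIE LUV values, so the per-rectangle color difference is a plain Euclidean distance between mean colors.

```python
import numpy as np

def csf(f):
    # Mannos-Sakrison contrast sensitivity approximation (an assumption,
    # not necessarily the CSF used in the paper); f in cycles per degree.
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def rect_metric(img_a, img_b, view_dist_px=1000.0, n_rects=500, seed=0):
    """Rectangle-based perceptual difference between two images.

    img_a, img_b: (H, W, 3) arrays assumed to already hold CIE LUV values.
    view_dist_px: viewing distance in pixel units, so a feature of size
    s pixels subtends roughly s / view_dist_px radians.
    """
    assert img_a.shape == img_b.shape
    h, w, _ = img_a.shape
    rng = np.random.default_rng(seed)
    total, weight_sum = 0.0, 0.0
    for _ in range(n_rects):
        # Random rectangle of random size and position.
        rh = int(rng.integers(1, h + 1))
        rw = int(rng.integers(1, w + 1))
        y = int(rng.integers(0, h - rh + 1))
        x = int(rng.integers(0, w - rw + 1))
        mean_a = img_a[y:y+rh, x:x+rw].reshape(-1, 3).mean(axis=0)
        mean_b = img_b[y:y+rh, x:x+rw].reshape(-1, 3).mean(axis=0)
        delta_e = np.linalg.norm(mean_a - mean_b)  # CIE LUV color difference
        # Map rectangle size to a spatial frequency (cycles per degree):
        # smaller rectangles probe higher frequencies, so at a larger
        # viewing distance fine-detail differences are weighted down.
        size_px = 0.5 * (rh + rw)
        degrees = np.degrees(size_px / view_dist_px)
        f = 1.0 / max(degrees, 1e-6)
        wgt = csf(f)
        total += wgt * delta_e
        weight_sum += wgt
    return total / weight_sum

# Identical images yield a difference of zero.
a = np.zeros((32, 32, 3))
print(rect_metric(a, a))  # 0.0
```

Because the metric is just a CSF-weighted average of per-rectangle color differences, it is view-distance dependent by construction: increasing `view_dist_px` shifts small rectangles toward higher frequencies, where the CSF weight is lower.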