EG2014
Browsing EG2014 by Title
Now showing 1 - 20 of 106
Item 3D Timeline: Reverse Engineering of a Part-based Provenance from Consecutive 3D Models (The Eurographics Association and John Wiley and Sons Ltd., 2014) Dobos, Jozef; Mitra, Niloy J.; Steed, Anthony; B. Levy and J. Kautz
We present a novel tool for reverse engineering modeling histories from consecutive 3D files based on a timeline abstraction. Although a timeline interface is commonly used in 3D modeling packages for animation, it has not previously been applied to geometry manipulation. Unlike previous visualization methods that require instrumentation of the editing software, our approach does not rely on pre-recorded editing instructions. Instead, each stand-alone 3D file is treated as a keyframe of a construction flow from which the editing provenance is reverse engineered. We evaluate this tool on six complex 3D sequences created in a variety of modeling tools by different professional artists and conclude that it provides a useful means of visualizing and understanding the editing history. A comparative user study suggests the tool is well suited for this purpose.

Item 3D Video: from Capture to Diffusion (The Eurographics Association, 2014) Rémion, Yannick; Lucas, Laurent; Loscos, Céline; Nicolas Holzschuch and Karol Myszkowski
While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest in 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century. This tutorial introduces the theoretical, technological and practical concepts associated with multiview systems. It covers acquisition, manipulation, and rendering.
Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide the necessary elements for understanding the underlying computer-based science of these technologies.

Item 4D Video Textures for Interactive Character Appearance (The Eurographics Association and John Wiley and Sons Ltd., 2014) Casas, Dan; Volino, Marco; Collomosse, John; Hilton, Adrian; B. Levy and J. Kautz
4D Video Textures (4DVT) introduce a novel representation for rendering video-realistic interactive character animation from a database of 4D actor performance captured in a multiple camera studio. 4D performance capture reconstructs dynamic shape and appearance over time but is limited to free-viewpoint video replay of the same motion. Interactive animation from 4D performance capture has so far been limited to surface shape only. 4DVT is the final piece in the puzzle, enabling video-realistic interactive animation through two contributions: a layered view-dependent texture map representation which supports efficient storage, transmission and rendering from multiple-view video capture; and a rendering approach that combines multiple 4DVT sequences in a parametric motion space, maintaining video-quality rendering of dynamic surface appearance whilst allowing high-level interactive control of character motion and viewpoint. 4DVT is demonstrated for multiple characters and evaluated both quantitatively and through a user study which confirms that the visual quality of captured video is maintained. The 4DVT representation achieves a >90% reduction in size and halves the rendering cost.

Item Accurate and Efficient Lighting for Skinned Models (The Eurographics Association and John Wiley and Sons Ltd., 2014) Tarini, Marco; Panozzo, Daniele; Sorkine-Hornung, Olga; B. Levy and J. Kautz
In the context of real-time, GPU-based rendering of animated skinned meshes, we propose a new algorithm to compute surface normals with minimal overhead, both in terms of memory footprint and required per-vertex operations.
By accounting for the variation of the skinning weights over the surface, we achieve higher visual quality than the standard approximation ubiquitously used in video-game engines and other real-time applications. Our method supports Linear Blend Skinning and Dual Quaternion Skinning. We demonstrate the advantages of our technique on a variety of datasets and provide a complete open-source implementation, including GLSL shaders.

Item Adaptive Texture Space Shading for Stochastic Rendering (The Eurographics Association and John Wiley and Sons Ltd., 2014) Andersson, Magnus; Hasselgren, Jon; Toth, Robert; Akenine-Möller, Tomas; B. Levy and J. Kautz
When rendering effects such as motion blur and defocus blur, shading can become very expensive if done in a naïve way, i.e., shading each visibility sample. To improve performance, previous work often decouples shading from visibility sampling using shader caching algorithms. We present a novel technique for reusing shading in a stochastic rasterizer. Shading is computed hierarchically and sparsely in an object-space texture, and by selecting an appropriate mipmap level for each triangle, we ensure that the shading rate is sufficiently high so that no noticeable blurring is introduced in the rendered image. Furthermore, with a two-pass algorithm, we separate shading from reuse and thus avoid GPU thread synchronization. Our method runs at real-time frame rates and is up to 3x faster than previous methods. This is an important step forward for stochastic rasterization in real time.

Item Analogy-Driven 3D Style Transfer (The Eurographics Association and John Wiley and Sons Ltd., 2014) Ma, Chongyang; Huang, Haibin; Sheffer, Alla; Kalogerakis, Evangelos; Wang, Rui; B. Levy and J. Kautz
Style transfer aims to apply the style of an exemplar model to a target one, while retaining the target's structure.
The main challenge in this process is to algorithmically distinguish style from structure, a high-level, potentially ill-posed cognitive task. Inspired by cognitive science research, we recast style transfer in terms of shape analogies. In IQ testing, shape analogy queries present the subject with three shapes: source, target and exemplar, and ask them to select an output such that the transformation, or analogy, from the exemplar to the output is similar to that from the source to the target. The logical process involved in identifying source-to-target analogies implicitly detects the structural differences between the source and target, and can be used effectively to facilitate style transfer. Since the exemplar has a similar structure to the source, applying the analogy to the exemplar provides the output we seek. The main technical challenge we address is computing source-to-target analogies consistent with human logic. We observe that the typical analogies we look for consist of a small set of simple transformations which, when applied to the exemplar, generate a continuous, seamless output model. To assemble a shape analogy, we compute an optimal set of source-to-target transformations such that the assembled analogy best fits these criteria. The assembled analogy is then applied to the exemplar shape to produce the desired output model. We use the proposed framework to seamlessly transfer a variety of style properties between 2D and 3D objects and demonstrate significant improvements over the state of the art in style transfer.
We further show that our framework can be used to successfully complete partial scans with the help of a user-provided structural template, coherently propagating scan style across the completed surfaces.

Item AR-TagBrowse: Annotating and Browsing 3D Objects on Mobile Devices (The Eurographics Association, 2014) Eftaxopoulos, Evangelos; Vasilakis, Andreas; Fudos, Ioannis; Mathias Paulin and Carsten Dachsbacher
We report on the development of AR-TagBrowse, a novel interactive augmented reality app built on Unity 3D that enables users to tag and browse 3D objects. Users upload 3D objects (polygonal representations and diffuse maps) through a web server. The 3D objects are then linked to real-world information such as images and GPS locations. Users may optionally segment the objects into areas of interest. These objects subsequently pop up in the AR-TagBrowse app when one of these events is detected (visible location or image). The user can then interactively view the 3D object, browse tags, or enter new tags providing comments or information about specific parts of the object.

Item Art-Photographic Detail Enhancement (The Eurographics Association and John Wiley and Sons Ltd., 2014) Son, Minjung; Lee, Yunjin; Kang, Henry; Lee, Seungyong; B. Levy and J. Kautz
We present a novel method for enhancing details in a digital photograph, inspired by the principles of art photography. In contrast to previous methods that rely primarily on tone scaling, our technique provides a flexible tone transform model that consists of two operators: shifting and scaling. This model permits shifting of the tonal range in each image region to enable significant detail boosting regardless of the original tone. We optimize these shift and scale factors in a constrained optimization framework to achieve extreme detail enhancement across the image in a piecewise smooth fashion, as in art photography.
The experimental results show that the proposed method brings out a significantly larger amount of detail, even from an ordinary low-dynamic-range image.

Item Automatic Generation of Tourist Brochures (The Eurographics Association and John Wiley and Sons Ltd., 2014) Birsak, Michael; Musialski, Przemyslaw; Wonka, Peter; Wimmer, Michael; B. Levy and J. Kautz
We present a novel framework for the automatic generation of tourist brochures that include routing instructions and additional information presented in the form of so-called detail lenses. The first contribution of this paper is the automatic creation of layouts for the brochures. Our approach is based on the minimization of an energy function that combines multiple goals: positioning the lenses as close as possible to the corresponding regions shown in an overview map, keeping the number of lenses low, and numbering the lenses efficiently. The second contribution is a route-aware simplification of the graph of streets used for traveling between the points of interest (POIs). This is done by reducing the graph consisting of all shortest paths through the minimization of an energy function. The output is a subset of street segments that enables traveling between all the POIs without considerable detours, while at the same time guaranteeing a clutter-free visualization.

Item Bayesian and Quasi Monte Carlo Spherical Integration for Illumination Integrals (The Eurographics Association, 2014) Marques, Ricardo; Bouville, Christian; Bouatouch, Kadi; Nicolas Holzschuch and Karol Myszkowski
The spherical sampling of the incident radiance function entails a high computational cost. Therefore the illumination integral must be evaluated using a limited set of samples. Such a restriction raises the question of how to obtain the most accurate approximation possible with such a limited set of samples.
We need to ensure that sampling produces the highest amount of information possible by carefully placing the limited set of samples. Furthermore, we want our integral evaluation to take into account not only the information produced by the sampling but also any information available prior to sampling. In this tutorial we focus on the case of hemispherical sampling for spherical Monte Carlo (MC) integration. We show that existing techniques can be improved by making a detailed analysis of the theory of MC spherical integration. We then use this theory to identify and improve the weak points of current approaches, based on very recent advances in the fields of Bayesian integration and spherical Quasi-Monte Carlo integration.

Item Clean Color: Improving Multi-filament 3D Prints (The Eurographics Association and John Wiley and Sons Ltd., 2014) Hergel, Jean; Lefebvre, Sylvain; B. Levy and J. Kautz
Fused Filament Fabrication is an additive manufacturing process by which a 3D object is created from plastic filament. The filament is pushed through a hot nozzle where it melts. The nozzle deposits plastic layer after layer to create the final object. This process has been popularized by the RepRap community. Several printers feature multiple extruders, allowing objects to be formed from multiple materials or colors. The extruders are mounted side by side on the printer carriage. However, print quality suffers when objects with color patterns are printed, a disappointment for designers interested in 3D printing their colored digital models. The most severe issue is the oozing of plastic from the idle extruders: plastics of different colors bleed onto each other, giving the surface a smudged aspect; excess strings oozing from the extruder deposit on the surface; and holes appear due to the missing plastic.
Fixing this issue is difficult: increasing the printing speed reduces oozing but also degrades surface quality, and on large prints the required speed becomes impractical. Adding a physical mechanism increases cost and print time, as extruders must travel to a cleaning station. Instead, we rely on software and exploit the degrees of freedom of the printing process. We introduce three techniques that complement each other in significantly improving print quality. We first reduce the impact of oozing plastic by choosing a better azimuth angle for the printed part. We then build a disposable rampart in close proximity to the part, giving the extruders the opportunity to wipe oozing strings and refill with hot plastic. We finally introduce a toolpath planner that avoids and hides most of the defects due to oozing, and seamlessly integrates the rampart. We demonstrate our technique on several challenging multiple-color prints, and show that our toolpath planner improves the surface finish of single-color prints as well.

Item Coded Exposure HDR Light-Field Video Recording (The Eurographics Association and John Wiley and Sons Ltd., 2014) Schedl, David C.; Birklbauer, Clemens; Bimber, Oliver; B. Levy and J. Kautz
Capturing exposure sequences to compute high dynamic range (HDR) images causes motion blur in cases of camera movement. This also applies to light-field cameras: frames rendered from multiple blurred HDR light-field perspectives are also blurred. While the recording times of exposure sequences cannot be reduced for a single-sensor camera, we demonstrate how this can be achieved for a camera array. Thus, we decrease capturing time and reduce motion blur for HDR light-field video recording. Applying a spatio-temporal exposure pattern while capturing frames with a camera array reduces the overall recording time and enables the estimation of camera movement within one light-field video frame.
By estimating depth maps and local point spread functions (PSFs) from multiple perspectives with the same exposure, regional motion deblurring can be supported. Missing exposures at various perspectives are then interpolated.

Item Combining Inertial Navigation and ICP for Real-time 3D Surface Reconstruction (The Eurographics Association, 2014) Nießner, Matthias; Dai, Angela; Fisher, Matthew; Eric Galin and Michael Wand
We present a novel method to improve the robustness of real-time 3D surface reconstruction by incorporating inertial sensor data when determining inter-frame alignment. With commodity inertial sensors, we can significantly reduce the number of iterative closest point (ICP) iterations required per frame. Our system is also able to determine when ICP tracking becomes unreliable and use inertial navigation to correctly recover tracking, even after significant time has elapsed. This enables less experienced users to acquire 3D scans more quickly. We apply our framework to several different surface reconstruction tasks and demonstrate that enabling inertial navigation allows us to reconstruct scenes more quickly and to recover from situations where reconstructing without IMU data produces very poor results.

Item Compressing Dynamic Meshes with Geometric Laplacians (The Eurographics Association and John Wiley and Sons Ltd., 2014) Vasa, Libor; Marras, Stefano; Hormann, Kai; Brunnett, Guido; B. Levy and J. Kautz
This paper addresses the problem of representing dynamic 3D meshes in a compact way, so that they can be stored and transmitted efficiently. We focus on sequences of triangle meshes with shared connectivity, avoiding the need for a skinning structure. Our method first computes an average mesh of the whole sequence in edge shape space. A discrete geometric Laplacian of this average surface is then used to encode the coefficients that describe the trajectories of the mesh vertices.
Optionally, a novel spatio-temporal predictor may be applied to the trajectories to further improve the compression rate. We demonstrate that our approach outperforms the current state of the art in terms of low data rate at a given perceived distortion, as measured by the STED and KG error metrics.

Item Content-Aware Surface Parameterization for Interactive Restoration of Historical Documents (The Eurographics Association and John Wiley and Sons Ltd., 2014) Pal, Kazim; Schüller, Christian; Panozzo, Daniele; Sorkine-Hornung, Olga; Weyrich, Tim; B. Levy and J. Kautz
We present an interactive method to restore severely damaged historical parchments. When damaged by heat in a fire, such manuscripts undergo a complex deformation and contain various geometric distortions such as wrinkling, buckling, and shrinking, rendering them nearly illegible. They cannot be physically flattened due to the risk of further damage. We propose a virtual restoration framework to estimate the non-rigid deformation the parchment underwent and to revert it, making the text significantly easier to read whilst maintaining the veracity of the textual content. We estimate the deformation by combining automatically extracted constraints with user-provided hints informed by domain knowledge. We demonstrate that our method successfully flattens and straightens the text on a variety of pages scanned from a 17th-century document which fell victim to fire damage.

Item Crack-free Rendering of Dynamically Tesselated B-Rep Models (The Eurographics Association and John Wiley and Sons Ltd., 2014) Claux, Frédéric; Barthe, Loïc; Vanderhaeghe, David; Jessel, Jean-Pierre; Paulin, Mathias; B. Levy and J. Kautz
We propose a versatile pipeline to render B-Rep models interactively, precisely and without rendering-related artifacts such as cracks. Our rendering method is based on dynamic surface evaluation using both tesselation and ray-casting, and direct GPU surface trimming.
An initial rendering of the scene is performed using dynamic tesselation. The algorithm we propose reliably detects and then fills cracks in the rendered image. Crack detection works in image space, using depth information, while crack filling is either achieved in image space using a simple classification process, or performed in object space through selective ray-casting. The crack-filling method can be changed dynamically at runtime. Our image-space crack-filling approach has a limited runtime cost and enables high-quality, real-time navigation. Our higher-quality, object-space approach results in a rendering of similar quality to full-scene ray-casting, but is 2 to 6 times faster, can be used during navigation, and provides accurate, reliable rendering. Integration of our work with existing tesselation-based rendering engines is straightforward.

Item Crowd Sculpting: A Space-time Sculpting Method for Populating Virtual Environments (The Eurographics Association and John Wiley and Sons Ltd., 2014) Jordao, Kevin; Pettré, Julien; Christie, Marc; Cani, Marie-Paule; B. Levy and J. Kautz
We introduce "Crowd Sculpting": a method to interactively design populated environments by using intuitive deformation gestures to drive both the spatial coverage and the temporal sequencing of a crowd motion. Our approach assembles large environments from sets of spatial elements which contain inter-connectible, periodic crowd animations. Such a Crowd Patches approach allows us to avoid expensive and difficult-to-control simulations. It also overcomes the limitations of motion editing, which would result in animations delimited in space and time. Our novel method allows the user to control the crowd patches layout in ways inspired by elastic shape sculpting: the user creates and tunes the desired populated environment through stretching, bending, cutting and merging gestures, applied either in space or time.
Our examples demonstrate that our method allows the space-time editing of very large populations and results in endless animations, while offering real-time, intuitive control and maintaining animation quality.

Item CSG Feature Trees from Engineering Sketches of Polyhedral Shapes (The Eurographics Association, 2014) Plumed, Raquel; Company, Pedro; Varley, Peter A. C.; Martin, Ralph R.; Eric Galin and Michael Wand
We give a method to obtain a 3D CSG model from a 2D engineering wireframe sketch which depicts a polyhedral shape. The method finds a CSG feature tree compatible with a reverse design history of a 2D line drawing obtained by vectorising the sketch. The process seeks the CSG feature tree recursively, combining all design or manufacturing features embedded in the sketch and proceeding in reverse order from the most detailed features to the blank.

Item Data Driven Assembly of Procedurally Modeled Facilities (The Eurographics Association, 2014) Bishop, M. Scott; Ferrer, Josè; Max, Nelson; Eric Galin and Michael Wand
We present a method to arrange components of industrial facilities within a constrained site footprint. We use a probabilistic graphical model of industrial sites and existing procedural modeling methods to automate the assembly and 3D modeling of wastewater treatment plants. A knowledge-engineered approach produces a combination of components that inherently contains domain-specific information such as process dependencies and facility size. The inferred combination is laid out using mathematical optimization or via a physics-based simulation, resulting in an arrangement that respects the industrial process and design plausibility.

Item Data-Driven Video Completion (The Eurographics Association, 2014) Ilan, Shachar; Shamir, Ariel; Sylvain Lefebvre and Michela Spagnuolo
Image completion techniques aim to complete selected regions of an image in a natural-looking manner with little or no user interaction.
Video completion, the space-time equivalent of the image completion problem, inherits and extends both the difficulties and the solutions of the original 2D problem, but also imposes new ones, mainly temporal coherency and space complexity (videos contain significantly more information than images). Data-driven approaches to completion have been established as a favored choice, especially when large regions have to be filled. In this report we present the current state of the art in data-driven video completion techniques. For unacquainted researchers, we aim to provide a broad yet easy-to-follow introduction to the subject and early guidance on the challenges ahead. For the versed reader, we offer a comprehensive review of contemporary techniques, sectioned by their approaches to key aspects of the problem.
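As a small aside to the listing, the "Accurate and Efficient Lighting for Skinned Models" entry above refers to "the standard approximation ubiquitously used in video-game engines" for normals under Linear Blend Skinning. The sketch below illustrates only that standard baseline (blend the bone rotations applied to the rest-pose normal, then renormalize), not the paper's corrected method; all function names are hypothetical and bones are simplified to (3x3 rotation, translation) pairs.

```python
import math

def mat_vec(m, v):
    # Multiply a 3x3 matrix (nested lists) by a 3-vector.
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def lbs_position(v, bones, weights):
    # Linear Blend Skinning of a vertex: v' = sum_i w_i * (R_i v + t_i)
    out = [0.0, 0.0, 0.0]
    for (rot, trans), w in zip(bones, weights):
        p = mat_vec(rot, v)
        for k in range(3):
            out[k] += w * (p[k] + trans[k])
    return out

def lbs_normal_standard(n, bones, weights):
    # Standard approximation: n' = normalize(sum_i w_i * R_i n).
    # It ignores the spatial variation of the skinning weights over the
    # surface, which is exactly the error the paper's method accounts for.
    out = [0.0, 0.0, 0.0]
    for (rot, _), w in zip(bones, weights):
        p = mat_vec(rot, n)
        for k in range(3):
            out[k] += w * p[k]
    norm = math.sqrt(sum(c * c for c in out)) or 1.0
    return [c / norm for c in out]
```

For example, a vertex skinned half to an identity bone and half to a bone rotated 90 degrees about z moves to the average of the two transformed positions, and its blended normal is renormalized to unit length.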