EG 2015 - Posters
Showing all 13 items, sorted by issue date.

Exploiting the Potential of Image Based Crowd Rendering (The Eurographics Association, 2015)
Izquierdo, Maria; Beacco, Alejandro; Pelechano, Nuria; Andujar, Carlos. Eds.: B. Solenthaler and E. Puppo.
Per-joint impostors have been used to achieve high performance when rendering thousands of agents, while still allowing animations to be blended. This provides interactively animated crowds and reduces the memory footprint compared to classic impostors. In this poster we exploit the potential of per-joint impostors to further increase both visual quality and performance. The CAVAST framework for crowd simulation and rendering has been used to quantitatively evaluate our improvements with the profiling tools that it provides. Since different applications will have different requirements in terms of performance vs. visual quality, we have extended CAVAST with a new user interface to ease this tuning process.

Deep Learning on a Raspberry Pi for Real Time Face Recognition (The Eurographics Association, 2015)
Dürr, Oliver; Pauchard, Yves; Browarnik, Diego; Axthelm, Rebekka; Loeser, Martin. Eds.: B. Solenthaler and E. Puppo.
In this paper we describe a fast and accurate pipeline for real-time face recognition that is based on a convolutional neural network (CNN) and requires only moderate computational resources. After training the CNN on a desktop PC we employed a Raspberry Pi, model B, for the classification procedure. Here, we reached a performance of approximately 2 frames per second and more than 97% recognition accuracy. The proposed approach outperforms all of OpenCV's algorithms with respect to both accuracy and speed and shows the applicability of recent deep learning techniques to hardware with limited computational performance.

Egocentric Normalization of Kinematic Path (The Eurographics Association, 2015)
Molla, Eray; Boulic, Ronan. Eds.: B. Solenthaler and E. Puppo.
We focus on retargeting the class of movements involving self-interactions onto characters of different sizes and proportions. Such postures may produce self-collisions and/or alter the intended semantics. We introduce a technique to normalize the spatial relationship vectors between the body parts of the source character. This allows for morphological adaptation of these vectors onto the target characters, hence preserving the semantics in postures with and without body contact.

Big City 3D Visual Analysis (The Eurographics Association, 2015)
Lv, Zhihan; Li, Xiaoming; Zhang, Baoyun; Wang, Weixi; Feng, Shengzhong; Hu, Jinxing. Eds.: B. Solenthaler and E. Puppo.
A big city visual analysis platform based on a Web Virtual Reality Geographical Information System (WEBVRGIS) is presented. Extensive model editing and spatial analysis functions are available, including terrain, sunlight, traffic, population and community analysis.

Real-time Content Adaptive Depth Retargeting for Light Field Displays (The Eurographics Association, 2015)
Adhikarla, Vamsi Kiran; Marton, Fabio; Barsi, Attila; Kovács, Péter Tamás; Balogh, Tibor; Gobbetti, Enrico. Eds.: B. Solenthaler and E. Puppo.
Light field display systems present visual scenes using a set of directional light beams emitted from multiple light sources, as if they were emitted from points in a physical scene. These displays offer better angular resolution and therefore provide more depth of field than other automultiscopic displays. However, in some cases the size of a scene may still exceed the available depth range of a light field display. Rendering on these displays therefore requires suitable adaptation of 3D content to provide a comfortable viewing experience. We propose a content adaptive depth retargeting method to automatically modify the scene depth to suit the needs of a light field display. By analyzing the scene and using display-specific parameters, we formulate and solve an optimization problem that non-linearly adapts the scene depth to the display depth. Our method synthesizes the depth-retargeted light field content in real time to support interactive visualization and preserves the 3D appearance of the displayed objects as much as possible.
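
The depth retargeting poster above maps scene depth non-linearly into a display's limited depth budget. The abstract does not give the optimization itself, so the sketch below is only an illustrative stand-in: it redistributes the display's depth range over a saliency-weighted depth histogram, and all function and parameter names are hypothetical.

```python
import numpy as np

def retarget_depth(scene_depth, saliency, display_near, display_far, n_bins=32):
    """Toy saliency-weighted depth retargeting (hypothetical, not the authors' method).

    The display's limited depth range is shared among depth bins in proportion
    to how much salient content each bin holds, then every pixel's depth is
    remapped piecewise-linearly into that budget.
    """
    z_min, z_max = float(scene_depth.min()), float(scene_depth.max())
    edges = np.linspace(z_min, z_max, n_bins + 1)
    width = edges[1] - edges[0]

    # Assign every pixel to a depth bin and weight each bin by its saliency.
    bin_idx = np.clip(((scene_depth - z_min) / width).astype(int), 0, n_bins - 1)
    importance = np.bincount(bin_idx.ravel(), weights=saliency.ravel(),
                             minlength=n_bins) + 1e-6

    # Distribute the display depth budget among bins according to importance.
    budget = (display_far - display_near) * importance / importance.sum()
    out_edges = display_near + np.concatenate(([0.0], np.cumsum(budget)))

    # Piecewise-linear remap of each pixel within its bin.
    t = (scene_depth - edges[bin_idx]) / width
    return out_edges[bin_idx] + t * budget[bin_idx]

# Example: scene depths spanning 0.5-20 m squeezed into a +/-0.3 m display range.
depth = np.random.uniform(0.5, 20.0, size=(1080, 1920))
sal = np.random.uniform(0.0, 1.0, size=depth.shape)
display_depth = retarget_depth(depth, sal, display_near=-0.3, display_far=0.3)
```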

EUROGRAPHICS 2015: Posters Frontmatter (Eurographics Association, 2015)
Solenthaler, Barbara; Puppo, Enrico.

A Computational Model of Light-Sheet Fluorescence Microscopy using Physically-based Rendering (The Eurographics Association, 2015)
Abdellah, Marwan; Bilgili, Ahmet; Eilemann, Stefan; Markram, Henry; Schürmann, Felix. Eds.: B. Solenthaler and E. Puppo.
We present a physically-based computational model of the light sheet fluorescence microscope (LSFM). Based on Monte Carlo ray tracing and geometric optics, our method simulates the operational aspects and image formation process of the LSFM. An extension of previous fluorescence models is developed to account for the intrinsic characteristics of fluorescent dyes in order to accurately simulate light interaction with fluorescent-tagged biological specimens. This extension was quantitatively validated against the fluorescence brightness equation and experimental spectra of different dyes. We demonstrate first results of applying our rendering pipeline to a simplified brain tissue model reconstructed from the somatosensory cortex of a young rat.

Privacy Protecting, Real-time Face Re-recognition (The Eurographics Association, 2015)
Niederberger, Thomas; Hegner, Robert; Hartmann, Andreas; Schuster, Guido M. Eds.: B. Solenthaler and E. Puppo.
We present a novel system for recognizing human individuals walking past a depth camera that is compatible with privacy protection laws. The system is developed to support the statistical analysis of movement patterns in indoor spaces. It is able to re-recognize previously seen individuals but is also capable of recognizing that an individual has not been seen before. The system is designed in a privacy-protecting way and does not rely on previously collected training data, but rather collects data during run-time. The proposed system processes each image of an individual separately, but we also present a new approach that combines several decisions into a single meta-decision in order to enhance classification performance.

GHand: A GPU Algorithm for Realtime Hand Pose Estimation Using Depth Camera (The Eurographics Association, 2015)
Nanjappa, Ashwin; Xu, Chi; Cheng, Li. Eds.: B. Solenthaler and E. Puppo.
We present GHand, a GPU algorithm for markerless hand pose estimation from a single depth image obtained from a commodity depth camera. Our method uses a dual random forest approach: the first forest estimates the position and orientation of the hand in 3D, while the second forest determines the joint angles of the kinematic chain of our hand model. GHand runs entirely on the GPU, at a speed of 64 FPS with an average 3D joint position error of 20 mm. It can detect complex poses with interlocked and occluded fingers and hidden fingertips. It requires no calibration before use and no retraining for differing hand sizes, and can be used in top- or front-mounted setups and with a moving camera.
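
As a rough, hypothetical illustration of GHand's dual-forest idea (one forest for the global hand pose, a second for the joint angles of the kinematic chain), the following CPU sketch uses scikit-learn regression forests on made-up feature vectors. It is not the authors' GPU implementation, and every name and dimension in it is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: per-image depth features with known global hand
# pose (x, y, z, yaw, pitch, roll) and joint angles of the hand model.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 128))      # depth-image feature vectors
global_pose = rng.normal(size=(1000, 6))     # position + orientation labels
joint_angles = rng.normal(size=(1000, 20))   # kinematic-chain angle labels

# Forest 1: estimate where the hand is and how it is oriented.
pose_forest = RandomForestRegressor(n_estimators=50).fit(features, global_pose)

# Forest 2: predict joint angles given the features plus the pose estimate
# (a crude stand-in for normalizing the input by the global pose).
pose_est = pose_forest.predict(features)
angle_forest = RandomForestRegressor(n_estimators=50).fit(
    np.hstack([features, pose_est]), joint_angles)

# Inference on a new depth image's feature vector.
f_new = rng.normal(size=(1, 128))
pose_new = pose_forest.predict(f_new)
angles_new = angle_forest.predict(np.hstack([f_new, pose_new]))
```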

PHENOM: Interest Points on Photometric Normal Maps (The Eurographics Association, 2015)
Sabzevari, Reza; Alak, Eren; Scaramuzza, Davide. Eds.: B. Solenthaler and E. Puppo.
This paper introduces a novel method for extracting features and matching points on images of texture-less surfaces. Feature points are extracted from surface normal maps recovered by Photometric Stereo. Such sparse matching helps to register high-detail 3D surfaces reconstructed from multiple-view images. Moreover, the geometric constraints imposed by multiple views can be utilized to correct the geometric ambiguity in photometric reconstruction. Experiments show the performance of the proposed interest points in matching texture-less objects. A comparison against texture-based interest points shows that the proposed normal-map-based features perform effectively.

SpatialWhiteboard: A New Wearable Air-Writing Interaction with Kinect Sensor and Vibrating Ring Interface (The Eurographics Association, 2015)
Yeom, Kiwon; Han, Hyejin; Oh, Yoonsik; Kwon, Jeunghum; You, Bum-Jae. Eds.: B. Solenthaler and E. Puppo.
SpatialWhiteboard is a spatial finger-writing system that enables complex spatial interactions through 3D handwriting via a Kinect sensor and a vibrating ring interface. By incorporating depth and skin color information, we can directly separate the hand from a cluttered background. We can also accurately differentiate the fingertip from the hand by combining the distance transform with an osculating-circle model. Users receive physical feedback in the form of vibrations from the wearable ring interface as their finger reaches a certain 3D position. Thus, it is now conceivable that anything people can do on contemporary touch-based devices, they could do in mid-air with a pseudo-contact interface.

Evaluation of the compressibility of Computer-Generated Holograms (The Eurographics Association, 2015)
Pimenta, Waldir; Santos, Luis P. Eds.: B. Solenthaler and E. Puppo.
We present a preliminary investigation of the compressibility of Physically-Based Computer-Generated Holograms (PB CGHs) as represented in various bases. The goal is to identify which bases, if any, are suitable for applying the principles of Compressed Sensing (CS) to the generation of PB CGHs. The Fourier, DCT and Haar wavelet bases were selected as a representative sample of the time-frequency spectrum of representation bases and evaluated according to several quality metrics. Contrary to what previous research suggested, we found that the DCT basis, not the Haar wavelet one, in general yielded better results.

Combining Human Visual System Models for Geographic Gaze Contingent Displays (The Eurographics Association, 2015)
Bektas, Kenan; Çöltekin, Arzu; Krüger, Jens; Duchowski, Andrew T. Eds.: B. Solenthaler and E. Puppo.
We present a gaze-contingent display (GCD) in which we combine multiple models of the human visual system (HVS) to manage the visual level of detail (LOD). GCDs respond to the viewer's gaze in real time, rendering a space-variant visualization. We aim to measure the computational and perceptual benefits of the proposed HVS models in terms of data reduction and user experience. Specifically, we combine models of contrast sensitivity, color perception and depth of field, and customize our implementation for geographic imagery. We believe this research is relevant to all domains that rely on image interpretation.
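
To make the gaze-contingent LOD idea in the last poster concrete, here is a minimal sketch that turns the angular distance from the tracked gaze point into a per-pixel LOD level using a simple acuity fall-off. The fall-off constant and all names are illustrative assumptions, not the poster's combined HVS model (which also accounts for color perception and depth of field).

```python
import numpy as np

def lod_from_gaze(width, height, gaze_px, px_per_degree=40.0,
                  max_lod=6, halving_ecc_deg=2.3):
    """Illustrative gaze-contingent LOD map (assumed constants, not the paper's model).

    Visual acuity roughly halves every few degrees of eccentricity, so each
    pixel's LOD is raised with its angular distance from the gaze point.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    dist_px = np.hypot(xs - gaze_px[0], ys - gaze_px[1])
    ecc_deg = dist_px / px_per_degree            # eccentricity in visual degrees

    # One extra mip/LOD level per acuity halving, clamped to the available levels.
    lod = np.clip(ecc_deg / halving_ecc_deg, 0, max_lod)
    return lod.astype(np.float32)

# Example: a 1920x1080 frame with the gaze resting near the center.
lod_map = lod_from_gaze(1920, 1080, gaze_px=(960, 540))
```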