VMV15
Browsing VMV15 by Subject "I.3.3 [Computer Graphics]"
Now showing 1 - 6 of 6
Item
Efficient GPU Based Sampling for Scene-Space Video Processing (The Eurographics Association, 2015)
Klose, Felix; Wang, Oliver; Bazin, Jean-Charles; Magnor, Marcus; Sorkine-Hornung, Alexander
Editors: David Bommes, Tobias Ritschel, Thomas Schultz
We describe a method to efficiently collect and filter a large set of 2D pixel observations of unstructured 3D points, with applications to scene-space-aware video processing. One of the main challenges in scene-space video processing is achieving reasonable computation times despite the very large volumes of data, often on the order of billions of pixels. The bottleneck is determining a suitable set of candidate samples used to compute each output video pixel color. These samples are observations of the same 3D point and must be gathered from a large number of candidate pixels by volumetric 3D queries in scene-space. Our approach takes advantage of the spatial and temporal continuity inherent to video to greatly reduce the candidate set of samples by solving 3D volumetric queries directly on a series of 2D projections, using out-of-core data streaming and an efficient GPU producer-consumer scheme that maximizes hardware utilization by exploiting memory locality. Our system is capable of processing over a trillion pixel samples, enabling various scene-space video processing applications on full-HD video output with hundreds of frames and processing times on the order of a few minutes.

Item
Extrapolating Large-Scale Material BTFs under Cross-Device Constraints (The Eurographics Association, 2015)
Steinhausen, Heinz Christian; Brok, Dennis den; Hullin, Matthias B.; Klein, Reinhard
Editors: David Bommes, Tobias Ritschel, Thomas Schultz
In this paper, we address the problem of acquiring bidirectional texture functions (BTFs) of large-scale material samples. Our approach fuses gonioreflectometric measurements of small samples with a few constraint images taken on a flatbed scanner under semi-controlled conditions.
Underlying our method is a lightweight texture synthesis scheme using a local texture descriptor that combines shading and albedo across devices. Since it operates directly on SVD-compressed BTF data, our method is computationally efficient and runs with a moderate memory footprint.

Item
Hierarchical Hashing for Pattern Search in 3D Vector Fields (The Eurographics Association, 2015)
Wang, Zhongjie; Seidel, Hans-Peter; Weinkauf, Tino
Editors: David Bommes, Tobias Ritschel, Thomas Schultz
The expressiveness of many visualization methods for 3D vector fields is often limited by occlusion, i.e., interesting flow patterns hide each other or are hidden by laminar flow. Automatic detection of patterns in 3D vector fields has gained attention recently, since it allows highlighting user-defined patterns and separating the wheat from the chaff. We propose an algorithm that detects 3D flow patterns of arbitrary extent in a robust manner. We encode the local flow behavior in scale space using a sequence of hierarchical base descriptors, which are pre-computed and hashed into a number of hash tables. This ensures fast fetching of similar occurrences in the flow and requires only a constant number of table lookups. In contrast to many previous approaches, our method supports patterns of arbitrary shape and extent. We achieve this by assembling these patterns from several smaller spheres. The results are independent of translation, rotation, and scaling. Our experiments show that our approach compares favorably with the state of the art in both computational cost and accuracy.

Item
Rotoscoping on Stereoscopic Images and Videos (The Eurographics Association, 2015)
Bukenberger, Dennis R.; Schwarz, Katharina; Groh, Fabian; Lensch, Hendrik P. A.
Editors: David Bommes, Tobias Ritschel, Thomas Schultz
Creating an animation based on video footage (rotoscoping) often requires significant manual work.
For monoscopic videos, diverse publications already feature (semi-)automatic techniques for applying non-photorealistic image abstraction (NPR) to videos. This paper addresses the abstraction of 3D stereo content while minimizing stereoscopic discomfort in images and videos. We introduce a completely autonomous framework that enhances stereo and temporal consistency. Stereoscopic coherence with consistent textures for both eyes is produced by warping the left and right images into a central disparity domain and then mapping them back to the left and right views. Smooth movements with reduced flickering are achieved by considering optical flow in the propagation of abstract features between frames. The results show significant improvements in stereo consistency without discomforting artifacts in depth perception. We extend existing stroke-based rendering (SBR) for higher accuracy at strong image gradients. Furthermore, we demonstrate that our stereo framework is easily applicable to other point-based abstraction styles. Finally, we evaluate the stereo consistency of our results in a small user study and show that the comfort of the visual appearance is maintained.

Item
Simple, Robust, Constant-Time Bounds on Surface Geodesic Distances using Point Landmarks (The Eurographics Association, 2015)
Burghard, Oliver; Klein, Reinhard
Editors: David Bommes, Tobias Ritschel, Thomas Schultz
In this paper we exploit redundant information in geodesic distance fields for a quick approximation of all-pair distances. Starting with geodesic distance fields of equally distributed landmarks, we analyze the lower and upper bounds resulting from the triangle inequality and show that both bounds converge reasonably fast to the original distance field. The lower bound itself has a bounded relative error, fulfills the triangle inequality, and under mild conditions is a distance metric.
While the absolute error of both bounds is smaller than the maximal landmark distance, the upper bound often exhibits a smaller error close to the cut locus. Both the lower and upper bounds are simple to implement and quick to evaluate, with constant-time effort for point-to-point distances, which are often required by various algorithms.

Item
Temporal Coherence Predictor for Time Varying Volume Data Based on Perceptual Functions (The Eurographics Association, 2015)
Noonan, Tom; Campoalegre, Lazaro; Dingliana, John
Editors: David Bommes, Tobias Ritschel, Thomas Schultz
This paper introduces an empirical, perceptually based method that exploits the temporal coherence between consecutive frames to reduce CPU-GPU traffic during real-time visualization of time-varying volume data. In this scheme, a multi-threaded CPU mechanism simulates GPU pre-rendering functions to characterize the local behaviour of the volume. These functions exploit the temporal coherence in the data to avoid sending complete per-frame datasets to the GPU. The predictive computations are designed to be simple enough to run in parallel on the CPU while improving the overall performance of GPU rendering. Our tests provide evidence that we can considerably reduce the texture size transferred at each frame without losing visual quality, while maintaining performance compared to sending entire frames to the GPU. The proposed framework is designed to scale to client/server network-based implementations for multi-user systems.
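The triangle-inequality bounds described in the Burghard and Klein abstract above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a 1D toy metric stands in for geodesic distance fields on a surface, and the variable names (`D`, `landmarks`, `bounds`) are assumptions.

```python
import numpy as np

# Toy stand-in for geodesic distances: points on a line, d(i, j) = |i - j|.
# In the paper these come from geodesic distance fields on a surface.
n = 100
points = np.arange(n, dtype=float)
exact = np.abs(points[:, None] - points[None, :])  # exact all-pair distances

# Distance fields of a few equally distributed landmarks (rows of `exact`).
landmarks = [0, 33, 66, 99]
D = exact[landmarks, :]          # shape (L, n): D[l, x] = d(landmark_l, x)

def bounds(x, y):
    """Constant-time lower/upper bounds on d(x, y) via the triangle inequality."""
    lower = np.max(np.abs(D[:, x] - D[:, y]))  # |d(l,x) - d(l,y)| <= d(x,y)
    upper = np.min(D[:, x] + D[:, y])          # d(x,y) <= d(l,x) + d(l,y)
    return lower, upper

lo, up = bounds(10, 70)   # both evaluate in O(L), independent of mesh size
```

For this 1D example the bounds happen to be tight (d(10, 70) = 60); on a real surface they bracket the geodesic distance and, per the abstract, tighten as landmarks are added.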
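The core idea of the Noonan et al. abstract above, uploading only the parts of the volume that changed perceptibly between frames, can be sketched as follows. This is a simplified sketch: the brick size, threshold, and the mean-absolute-difference predictor are assumptions standing in for the paper's perceptual functions.

```python
import numpy as np

def changed_bricks(prev, curr, brick=8, threshold=0.01):
    """Return indices of volume bricks whose content changed perceptibly.

    prev, curr: 3D volumes of equal shape. Only the listed bricks would
    need to be re-uploaded to the GPU for the current frame.
    """
    changed = []
    sx, sy, sz = prev.shape
    for x in range(0, sx, brick):
        for y in range(0, sy, brick):
            for z in range(0, sz, brick):
                a = prev[x:x+brick, y:y+brick, z:z+brick]
                b = curr[x:x+brick, y:y+brick, z:z+brick]
                # Simple change predictor: mean absolute difference
                # (a placeholder for the paper's perceptual functions).
                if np.mean(np.abs(a - b)) > threshold:
                    changed.append((x, y, z))
    return changed

prev = np.zeros((16, 16, 16))
curr = prev.copy()
curr[0:8, 0:8, 0:8] += 1.0        # only one brick changes between frames
bricks = changed_bricks(prev, curr)
```

With static regions skipped, per-frame CPU-GPU traffic shrinks from the full volume to the changed bricks, which is the traffic reduction the abstract describes.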