Browsing by Author "Gross, Markus"
Now showing 1 - 4 of 4
Item: 2017 Cover Image: Mixing Bowl (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Marra, Alessia; Nitti, Maurizio; Papas, Marios; Müller, Thomas; Gross, Markus; Jarosz, Wojciech; Novák, Jan; Chen, Min and Zhang, Hao (Richard)

Item: Correlated Point Sampling for Geospatial Scalar Field Visualization (The Eurographics Association, 2018)
Roveri, Riccardo; Lehmann, Dirk J.; Gross, Markus; Günther, Tobias; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
Multi-variate visualizations of geospatial data often use combinations of different visual cues, such as color and texture. For textures, different point distributions (blue noise, regular grids, etc.) can encode nominal data. In this paper, we study the suitability of point distribution interpolation to encode quantitative information. For the interpolation, we use a texture synthesis algorithm, which paves the path towards an encoding of quantitative data using points. First, we conduct a user study to perceptually linearize the transitions between uniform point distributions, including blue noise, regular grids, and hexagonal grids. Based on the linearization models, we implement a point sampling-based visualization for geospatial scalar fields and assess the accuracy of user perception by comparing the perceived transition with the transition expected from our linearized models. We illustrate our technique on several real geospatial data sets, in which users identify regions with a certain distribution. Point distributions work well in combination with color data, as they require little space and allow the user to see through to the underlying color maps.
We found that interpolations between blue noise and regular grids worked perceptually best among the tested candidates.

Item: Deep Fluids: A Generative Network for Parameterized Fluid Simulations (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Kim, Byungsoo; Azevedo, Vinicius C.; Thuerey, Nils; Kim, Theodore; Gross, Markus; Solenthaler, Barbara; Alliez, Pierre and Pellacini, Fabio
This paper presents a novel generative model to synthesize fluid simulations from a set of reduced parameters. A convolutional neural network is trained on a collection of discrete, parameterizable fluid simulation velocity fields. Because deep learning architectures can learn representative features of the data, our generative model accurately approximates the training data set while providing plausible interpolated in-betweens. The proposed generative model is optimized for fluids by a novel loss function that guarantees divergence-free velocity fields at all times. In addition, we demonstrate that we can handle complex parameterizations in reduced spaces, and advance simulations in time by integrating in the latent space with a second network. Our method models a wide variety of fluid behaviors, enabling applications such as fast construction of simulations, interpolation of fluids with different parameters, time re-sampling, latent space simulations, and compression of fluid simulation data. Reconstructed velocity fields are generated up to 700x faster than re-simulating the data with the underlying CPU solver, while achieving compression rates of up to 1300x.

Item: Practical Person-Specific Eye Rigging (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Bérard, Pascal; Bradley, Derek; Gross, Markus; Beeler, Thabo; Alliez, Pierre and Pellacini, Fabio
We present a novel parametric eye rig for eye animation, including a new multi-view imaging system that can reconstruct eye poses at submillimeter accuracy, to which we fit our new rig.
This allows us to accurately estimate person-specific eyeball shape, rotation center, interocular distance, visual axis, and other rig parameters, resulting in an animation-ready eye rig. We demonstrate the importance of several aspects of eye modeling that are often overlooked: for example, that the visual axis is not identical to the optical axis, that it is important to model rotation about the optical axis, and that the rotation center of the eye should be measured accurately for each person. Since accurate rig fitting requires hand annotation of multi-view imagery for several eye gazes, we additionally propose a more user-friendly "lightweight" fitting approach, which leverages an average rig created from several pre-captured accurate rigs. Our lightweight rig fitting method estimates eyeball shape and eyeball position given only a single pose with a known look-at point (e.g. looking into a camera) and a few manual annotations.
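
The Deep Fluids abstract above mentions a loss function that guarantees divergence-free velocity fields. One standard way to obtain such a guarantee (a minimal numerical sketch under our own assumptions, not necessarily the paper's exact formulation) is to predict a scalar stream function and take its 2D curl, since the resulting field is divergence-free by construction:

```python
import numpy as np

def velocity_from_stream(psi, dx=1.0):
    # 2D curl of a stream function psi: u = d(psi)/dy, v = -d(psi)/dx.
    # Any field built this way is divergence-free by construction.
    u = np.gradient(psi, dx, axis=0)    # d(psi)/dy (axis 0 = y)
    v = -np.gradient(psi, dx, axis=1)   # -d(psi)/dx (axis 1 = x)
    return u, v

def divergence(u, v, dx=1.0):
    # div = du/dx + dv/dy, via central differences.
    return np.gradient(u, dx, axis=1) + np.gradient(v, dx, axis=0)

# Even an arbitrary stream function yields a divergence-free field.
psi = np.random.default_rng(0).standard_normal((32, 32))
u, v = velocity_from_stream(psi)
print(np.max(np.abs(divergence(u, v))))  # numerically ~0
```

Because the finite-difference operators along the two axes commute, the measured divergence vanishes up to floating-point rounding, which is what a hard (constructive) incompressibility constraint buys over a soft penalty term.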