38-Issue 7
Browsing 38-Issue 7 by Subject "centered computing"
Now showing 1–4 of 4
Item: A Generalized Cubemap for Encoding 360° VR Videos using Polynomial Approximation (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Xiao, Jianye; Tang, Jingtao; Zhang, Xinyu
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
Abstract: 360° VR videos provide users with an immersive visual experience. To encode them, spherical pixels must be mapped onto a two-dimensional domain so that existing video encoding and storage standards can be used. In the VR industry, standard cubemap projection is the most widely used projection for encoding 360° VR videos; however, its projection distortion makes the pixel density vary across regions. We present a generalized algorithm that improves the efficiency of cubemap projection using polynomial approximation; in our formulation, standard cubemap projection is the special case of a 1st-order polynomial. Our experiments show that the generalized cubemap projection significantly reduces projection distortion when higher-order polynomials are used, so that pixel distribution is well balanced in the resulting 360° VR videos. We use PSNR, S-PSNR, and CPP-PSNR to evaluate visual quality, and the results demonstrate promising improvements over standard cubemap projection and Google's equi-angular cubemap.
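The abstract above treats standard cubemap projection as the 1st-order special case of a polynomial face mapping. As a rough illustration of that idea only, not the paper's algorithm, the Python sketch below warps cube-face coordinates with an odd polynomial before turning them into view directions; the odd-polynomial form, the coefficient values, and the restriction to the +Z face are assumptions made for brevity.

```python
# A minimal sketch, not the authors' code: a "generalized cubemap" face mapping.
# Standard cubemap projection places face coordinates (u, v) in [-1, 1] linearly
# on the face plane; here they are first warped by an odd polynomial so that the
# pixel density over the sphere can be rebalanced. Coefficients are illustrative.
import numpy as np

def warp(t, coeffs):
    """Odd polynomial warp t -> c1*t + c3*t**3 + ...
    (a single 1st-order term reproduces the standard cubemap)."""
    return sum(c * t ** (2 * k + 1) for k, c in enumerate(coeffs))

def face_to_direction(u, v, coeffs=(1.0,)):
    """Map (possibly warped) face coordinates on the +Z cube face to a unit
    view direction on the sphere."""
    x, y = warp(u, coeffs), warp(v, coeffs)
    d = np.array([x, y, 1.0])
    return d / np.linalg.norm(d)

# Standard cubemap (1st-order polynomial, i.e. identity warp):
print(face_to_direction(0.5, 0.5))
# A higher-order polynomial that redistributes samples across the face
# (coefficient values are assumptions for illustration only):
print(face_to_direction(0.5, 0.5, coeffs=(0.8, 0.2)))
```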
Item: Interactive Curation of Datasets for Training and Refining Generative Models (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Ye, Wenjie; Dong, Yue; Peers, Pieter
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
Abstract: We present a novel interactive, learning-based method for curating datasets according to user-defined criteria for training and refining Generative Adversarial Networks. We employ a novel batch-mode active learning strategy that progressively selects small batches of candidate exemplars, for which the user is asked to indicate whether they match the, possibly subjective, selection criteria. After each batch, a classifier that models the user's intent is refined and then used to select the next batch of candidates. After the selection process ends, the final classifier, trained with limited but adaptively selected training data, is used to sift through the large collection of input exemplars and extract a sufficiently large subset, matching the user's selection criteria, for training or refining the generative model. A key distinguishing feature of our system is that we do not assume the user can always make a firm binary decision (i.e., "meets" or "does not meet" the selection criteria) for each candidate exemplar; instead, the user may label an exemplar as "undecided". We rely on a non-binary query-by-committee strategy to distinguish between the user's uncertainty and the trained classifier's uncertainty, and we develop a novel disagreement distance metric to encourage a diverse candidate set. In addition, a number of optimization strategies are employed to achieve an interactive experience. We demonstrate our interactive curation system on several applications related to training or refining generative models: training a Generative Adversarial Network that meets user-defined criteria, adjusting the output distribution of an existing generative model, and removing unwanted samples from a generative model.

Item: ManyLands: A Journey Across 4D Phase Space of Trajectories (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Amirkhanov, Aleksandr; Kosiuk, Ilona; Szmolyan, Peter; Amirkhanov, Artem; Mistelbauer, Gabriel; Gröller, Eduard; Raidou, Renata Georgia
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
Abstract: Mathematical models based on ordinary differential equations are used to describe and understand biological phenomena. These models are dynamical systems that often describe the time evolution of more than three variables, i.e., their dynamics take place in a multi-dimensional space called the phase space. Currently, mathematical domain scientists analyze the qualitative behavior of dynamical systems with plots of typical trajectories in the phase space, called phase portraits, which work well for 2D and 3D dynamical systems. For 4D systems, however, the visual exploration of trajectories becomes challenging, as simple subspace juxtaposition is not sufficient. We propose ManyLands to support mathematical domain scientists in analyzing 4D models of biological systems. Describing the subspaces as Lands, we accompany domain scientists along a continuous journey through 4D HyperLand, 3D SpaceLand, and 2D FlatLand, using seamless transitions; the Lands are also linked to 1D TimeLines. We offer an additional dissected view of trajectories that relies on small-multiple, compass-like pictograms for easy navigation across subspaces and trajectory segments of interest. We show three use cases of 4D dynamical systems from cell biology and biochemistry. An informal evaluation with mathematical experts confirmed that ManyLands helps them visualize and analyze complex 4D dynamics while facilitating mathematical experiments and simulations.

Item: ShutterApp: Spatio-temporal Exposure Control for Videos (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Salamon, Nestor; Billeter, Markus; Eisemann, Elmar
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
Abstract: A camera's shutter controls the incoming light that reaches the camera sensor. Different shutters lead to wildly different results and are often used as an artistic tool in movies, e.g., to indirectly control the amount of motion blur. However, a physical camera is limited to a single shutter setting at any given moment. ShutterApp enables users to define spatio-temporally varying virtual shutters that go beyond the options available in real-world camera systems. The user provides a sparse set of annotations that define shutter functions at selected locations in key frames. From this input, our solution derives a shutter function for every pixel of the video sequence using a suitable interpolation technique; these per-pixel functions are then used to compute the output video. Our solution runs in real time on commodity hardware, so users can explore different options interactively, reaching a new level of expressiveness without relying on specialized hardware or laborious editing.
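The ShutterApp abstract describes deriving a shutter function for every pixel and then computing the output video from those functions. One plausible reading of that final step, sketched below in Python under stated assumptions, is a per-pixel weighted temporal average over the input frames; the interpolation of sparse annotations into per-pixel shutter functions is omitted, and the array names, shapes, and box-shaped shutters are illustrative only, not the authors' implementation.

```python
# A minimal sketch, not the authors' implementation: applying per-pixel virtual
# shutter functions to a short video block. `frames` is a (T, H, W, C) float
# array; `shutter` is a (T, H, W) array of per-pixel, per-frame exposure weights,
# e.g. obtained by interpolating sparse user annotations (interpolation omitted).
import numpy as np

def apply_virtual_shutter(frames, shutter, eps=1e-8):
    """Weighted temporal average: each pixel integrates incoming light
    according to its own shutter function."""
    weighted = (frames * shutter[..., None]).sum(axis=0)  # per-pixel weighted sum
    exposure = shutter.sum(axis=0)[..., None] + eps       # per-pixel total exposure
    return weighted / exposure

# Hypothetical example: a long box shutter on the left half of the image
# (more motion blur) and a single-frame exposure on the right half (crisp).
T, H, W = 8, 4, 6
frames = np.random.rand(T, H, W, 3)
shutter = np.zeros((T, H, W))
shutter[:, :, : W // 2] = 1.0
shutter[T // 2, :, W // 2 :] = 1.0
print(apply_virtual_shutter(frames, shutter).shape)  # (4, 6, 3)
```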