38-Issue 7
Browsing 38-Issue 7 by Subject "based rendering"
Now showing 1 - 2 of 2
Item: Deep Video-Based Performance Synthesis from Sparse Multi-View Capture
(The Eurographics Association and John Wiley & Sons Ltd., 2019)
Chen, Mingjia; Wang, Changbo; Liu, Ligang; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon

We present a deep learning based technique that enables novel-view videos of human performances to be synthesized from sparse multi-view captures. While performance capture from a sparse set of videos has received significant attention, there has been relatively little progress on non-rigid objects such as human bodies. The rich articulation modes of the human body make it challenging to synthesize and interpolate the model well. To address this problem, we propose a novel deep learning based framework that directly predicts novel-view videos of human performances without explicit 3D reconstruction. Our method is a composition of two steps: novel-view prediction and detail enhancement. We first learn a novel deep generative query network for view prediction, synthesizing novel-view performances from a sparse set of five or fewer camera videos. We then use a new generative adversarial network to enhance the fine-scale details of the first-step results. This opens up the possibility of high-quality, low-cost video-based performance synthesis, which is gaining popularity for VR and AR applications. We demonstrate a variety of promising results, where our method synthesizes more robust and accurate performances than existing state-of-the-art approaches when only sparse views are available.

Item: Light Field Video Compression and Real Time Rendering
(The Eurographics Association and John Wiley & Sons Ltd., 2019)
Hajisharif, Saghi; Miandji, Ehsan; Larsson, Per; Tran, Kiet; Unger, Jonas; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon

Light field imaging is rapidly becoming an established method for generating flexible image-based descriptions of scene appearance. Compared to classical 2D imaging techniques, the angular information included in light fields enables effects such as post-capture refocusing and the exploration of the scene from different vantage points. In this paper, we describe a novel GPU pipeline for compression and real-time rendering of light field videos with full parallax. To achieve this, we employ a dictionary learning approach and train an ensemble of dictionaries capable of efficiently representing light field video data using highly sparse coefficient sets. A novel, key element in our representation is that we simultaneously compress both the image data (pixel colors) and the auxiliary information (depth, disparity, or optical flow) required for view interpolation. During playback, the coefficients are streamed to the GPU, where the light field and the auxiliary information are reconstructed using the dictionary ensemble and view interpolation is performed. To realize the pipeline, we present several technical contributions, including a denoising scheme that enhances the sparsity in the dataset, enabling higher compression ratios, and a novel pruning strategy that reduces the size of the dictionary ensemble and leads to significant reductions in computational complexity during the encoding of a light field. Our approach is independent of the light field parameterization and can be used with data from any light field video capture system. To demonstrate the usefulness of our pipeline, we utilize various publicly available light field video datasets and discuss the medical application of documenting heart surgery.