38-Issue 7
Browsing 38-Issue 7 by Subject "based models"
Now showing 1 - 5 of 5
Item: FontRNN: Generating Large-scale Chinese Fonts via Recurrent Neural Network (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Tang, Shusen; Xia, Zeqing; Lian, Zhouhui; Tang, Yingmin; Xiao, Jianguo; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
Despite the recent impressive development of deep neural networks, using deep-learning-based methods to generate large-scale Chinese fonts remains a challenging task due to the huge number of intricate Chinese glyphs; for example, the official standard Chinese charset GB18030-2000 consists of 27,533 Chinese characters. Until now, most existing models for this task have adopted Convolutional Neural Networks (CNNs) to generate bitmap images of Chinese characters, owing to the remarkable success of CNN-based models in various applications. However, CNN-based models focus on image-level features and usually ignore the stroke-order information involved in writing characters. Instead, we treat Chinese characters as sequences of points (i.e., writing trajectories) and propose to handle this task with an effective Recurrent Neural Network (RNN) model equipped with a monotonic attention mechanism, which can learn from as few as hundreds of training samples and then synthesize glyphs for the remaining thousands of characters in the same style. Experimental results show that our proposed FontRNN can be used to synthesize large-scale Chinese fonts as well as to generate realistic Chinese handwriting efficiently.

Item: High Dynamic Range Point Clouds for Real-Time Relighting (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Sabbadin, Manuele; Palma, Gianpaolo; Banterle, Francesco; Boubekeur, Tamy; Cignoni, Paolo; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
Acquired 3D point clouds make it possible to quickly model virtual scenes from the real world. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response.
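The FontRNN entry above treats each character as a writing trajectory rather than a bitmap. A minimal sketch of that input representation, with hypothetical names and a toy two-stroke glyph (not the paper's actual data format), converts absolute stroke points into the (dx, dy, pen-lift) triples a sequence model would consume:

```python
# Hedged illustration: a glyph as a writing trajectory. Each stroke is a list
# of (x, y) points; the output is a flat sequence of (dx, dy, pen_lift)
# triples, where pen_lift=1 marks the end of a stroke. Names are hypothetical.

def strokes_to_offsets(strokes):
    """Flatten strokes into relative-offset triples for a sequence model."""
    seq = []
    prev = (0.0, 0.0)
    for stroke in strokes:
        for i, (x, y) in enumerate(stroke):
            lift = 1 if i == len(stroke) - 1 else 0
            seq.append((x - prev[0], y - prev[1], lift))
            prev = (x, y)
    return seq

# A toy two-stroke "glyph": a horizontal stroke, then a vertical one.
glyph = [[(0, 0), (1, 0)], [(1, 1), (1, 2)]]
offsets = strokes_to_offsets(glyph)
```

An RNN trained on a few hundred such sequences in a target style could then, per the abstract, generate trajectories for the remaining characters.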
Although rendering and processing such data has been extensively studied, little attention has been devoted to using the light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality (AR) scenarios. Typically, a standard relighting environment exploits global environment maps together with a collection of local light probes to reflect the light mood of the real scene onto the virtual object. We instead propose a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene, which may cover only part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings onto the original cloud. At this stage, we propagate the expansion to regions not covered by the renderings, or covered with low-quality dynamic range, by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage, as well as a new mipmapping operator tailored for G-buffers, to achieve real-time performance. As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically-based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at each step relative to a perfect ground truth.
We also report experiments with real captured data, covering a range of capture technologies, from active scanning to multi-view stereo reconstruction.

Item: Pyramid Multi-View Stereo with Local Consistency (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Liao, Jie; Fu, Yanping; Yan, Qingan; Xiao, Chunxia; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
In this paper, we propose a PatchMatch-based Multi-View Stereo (MVS) algorithm that can efficiently estimate geometry in textureless areas. Conventional PatchMatch-based MVS algorithms estimate depth and normal hypotheses mainly by optimizing photometric consistency metrics between a patch in the reference image and its projection onto other images. Photometric consistency works well in textured regions but cannot discriminate among hypotheses in textureless regions, which makes geometry estimation there difficult. To address this issue, we introduce local consistency. Based on the assumption that neighboring pixels with similar colors likely belong to the same surface and share approximately the same depth and normal values, local consistency guides depth and normal estimation with geometry from neighboring pixels of similar color. To speed up the convergence of pixelwise local consistency across the image, we further introduce a pyramid architecture, similar to previous work, which also provides coarse estimates at the upper levels. We validate the effectiveness of our method on the ETH3D and Tanks and Temples benchmarks. Results show that our method outperforms the state of the art.

Item: Reliable Rolling-guided Point Normal Filtering for Surface Texture Removal (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Sun, Yangxing; Chen, Honghua; Qin, Jing; Li, Hongwei; Wei, Mingqiang; Zong, Hua; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
Semantic surface decomposition (SSD) facilitates various geometry processing and product re-design tasks.
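The textureless-region failure mode named in the Pyramid MVS entry above can be made concrete with a small sketch. A common photometric consistency score in PatchMatch-style MVS is normalized cross-correlation (NCC) between patch intensities; in a constant-color patch the variance vanishes and the score carries no signal, which is exactly where a local-consistency prior from similarly colored neighbors must take over. This is an illustrative stand-in, not the paper's implementation:

```python
# Hedged sketch: NCC between two flattened intensity patches. For a
# textureless (constant) patch the denominator is zero and the score is
# uninformative; we return 0.0 there to flag the degenerate case.
import math

def ncc(patch_a, patch_b):
    n = len(patch_a)
    mean_a = sum(patch_a) / n
    mean_b = sum(patch_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(patch_a, patch_b))
    var_a = sum((a - mean_a) ** 2 for a in patch_a)
    var_b = sum((b - mean_b) ** 2 for b in patch_b)
    denom = math.sqrt(var_a * var_b)
    if denom == 0:  # textureless patch: no discriminative signal
        return 0.0
    return cov / denom
```

A well-textured patch pair scores near 1 under a correct hypothesis, while a flat patch scores 0 for every hypothesis, motivating the geometric prior.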
Filter-based techniques are a natural and widely used way to achieve SSD, but they often lead to either surface under-fitting or over-fitting. In this paper, we propose a reliable rolling-guided point normal filtering method to decompose textures from a captured point cloud surface. Our method is built on the geometric assumption that 3D surfaces comprise an underlying shape (US) and a variety of bump-ups and -downs (BUDs) on that shape. We make three core contributions. First, by considering the BUDs as surface textures, we present a RANSAC-based sub-neighborhood detection scheme to distinguish the US from the textures. Second, to better preserve the US (especially its prominent structures), we introduce a patch-shift scheme to estimate the guidance normal that feeds the rolling-guided filter. Third, we formulate a new position-updating scheme to alleviate the common uneven distribution of points. Both visual and numerical experiments demonstrate that our method is comparable to state-of-the-art methods in the robustness of texture removal and the effectiveness of underlying-shape preservation.

Item: Visibility-Aware Progressive Farthest Point Sampling on the GPU (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Brandt, Sascha; Jähn, Claudius; Fischer, Matthias; Heide, Friedhelm Meyer auf der; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
In this paper, we present the first algorithm for progressive sampling of 3D surfaces with blue noise characteristics that runs entirely on the GPU. The performance of our algorithm is comparable to state-of-the-art GPU Poisson-disk sampling methods, while additionally producing ordered sequences of samples in which every prefix exhibits good blue noise properties. The basic idea is to reduce the 3D sampling domain to a set of 2.5D images, which we sample in parallel using the rasterization hardware of current GPUs.
This allows for simple visibility-aware sampling that captures only the surface as seen from outside the sampled object, which is especially useful for point-based level-of-detail rendering methods. However, our method can easily be extended to sample the entire surface without changing the basic algorithm. We provide a statistical analysis showing that every prefix of the resulting sample sequence exhibits good blue noise characteristics, and we compare the performance of our method to related state-of-the-art sampling methods.
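The prefix property claimed in the last entry above comes from the farthest-point construction itself: each new sample is the point farthest from everything already chosen, so stopping after any k samples still yields a well-spread set. A sequential CPU sketch of that principle (the paper's contribution is parallelizing it over rasterized 2.5D views on the GPU; this toy version is only the underlying idea, with hypothetical names):

```python
# Hedged sketch of progressive farthest point sampling: maintain, for every
# point, its squared distance to the nearest chosen sample, and repeatedly
# pick the point that maximizes it. Every prefix of `order` is itself a
# well-spread (blue-noise-like) sample of the input.
def farthest_point_sampling(points, k):
    dist = [float("inf")] * len(points)   # squared distance to chosen set
    order = [0]                           # arbitrary seed point
    for _ in range(k - 1):
        last = points[order[-1]]
        for i, p in enumerate(points):
            d = sum((a - b) ** 2 for a, b in zip(p, last))
            if d < dist[i]:
                dist[i] = d
        order.append(max(range(len(points)), key=dist.__getitem__))
    return order
```

Each iteration only updates distances against the newest sample, giving O(nk) total work sequentially; the per-point update is what maps naturally onto GPU parallelism.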