37-Issue 7
Browsing 37-Issue 7 by Subject "Computing methodologies"
Now showing 1 - 20 of 30
Item: Automatic Mechanism Modeling from a Single Image with CNNs (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Lin, Minmin; Shao, Tianjia; Zheng, Youyi; Ren, Zhong; Weng, Yanlin; Yang, Yin. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
This paper presents a novel system that enables fully automatic modeling of both the 3D geometry and the functionality of a mechanism assembly from a single RGB image. The resulting 3D mechanism model closely resembles the one in the input image, with the geometry, mechanical attributes, connectivity, and functionality of all the mechanical parts prescribed in a physically valid way. This challenging task is realized by combining various deep convolutional neural networks to provide high-quality, automatic part detection, segmentation, camera pose estimation and mechanical attribute retrieval for each individual part. On top of this, we use a local/global optimization algorithm to establish geometric interdependencies among all the parts while retaining their desired spatial arrangement. We use an interaction graph to abstract the inter-part connections in the resulting mechanism system. If an isolated component is identified in the graph, our system enumerates all possible solutions to restore the graph connectivity and outputs the one with the smallest residual error. We have extensively tested our system on a wide range of classic mechanism photos, and experimental results show that the proposed system is able to build high-quality 3D mechanism models without user guidance.

Item: Binocular Tone Mapping with Improved Overall Contrast and Local Details (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Zhang, Zhuming; Hu, Xinghong; Liu, Xueting; Wong, Tien-Tsin. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
Tone mapping is a commonly used technique that maps the set of colors in high-dynamic-range (HDR) images to another set of colors in low-dynamic-range (LDR) images, to fit the needs of print-outs, LCD monitors and projectors. Unfortunately, during the compression of dynamic range, the overall contrast and local details generally cannot be preserved simultaneously. Recently, with the increased use of stereoscopic devices, the notion of binocular tone mapping has been proposed in prior research. However, the existing work lacks a study of binocular perception and is unable to generate the optimal binocular pair that presents the most visual content. In this paper, we propose a novel perception-based binocular tone mapping method that generates an optimal binocular image pair (producing the left and right images simultaneously) from an HDR image, presenting the most visual content by means of a binocular perception metric we design. Our method outperforms the existing method in terms of both visual quality and time performance.

Item: Biorthogonal Wavelet Surface Reconstruction Using Partial Integrations (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Ren, Xiaohua; Lyu, Luan; He, Xiaowei; Cao, Wei; Yang, Zhixin; Sheng, Bin; Zhang, Yanci; Wu, Enhua. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
We introduce a new biorthogonal wavelet approach to creating a water-tight surface, defined by an implicit function, from a finite set of oriented points. Our approach aims at addressing the problems of previous wavelet methods, which are not resilient to missing or nonuniformly sampled data. To address these problems, our approach has two key elements. First, by applying three-dimensional partial integration, we derive a new integral formula to compute the wavelet coefficients without requiring the implicit function to be an indicator function. It can be shown that the previously used formula is a special case of ours, obtained when the integrated function is an indicator function. Second, a simple yet general method is proposed to construct smooth wavelets with small support. With our method, a family of wavelets can be constructed with the same support size as previously used wavelets while having one more degree of continuity. Experiments show that our approach robustly produces results comparable to those of the Fourier and Poisson methods, regardless of whether the input data are noisy, missing or nonuniform. Moreover, our approach does not need to compute global integrals or solve large linear systems.
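For context on the partial integration mentioned in this abstract, the display below states only the standard three-dimensional integration-by-parts identity and the divergence theorem, not the paper's specific biorthogonal coefficient formula:

```latex
\int_{V} g\,(\nabla\cdot\mathbf{F})\,\mathrm{d}V
  \;=\; \oint_{\partial V} g\,(\mathbf{F}\cdot\mathbf{n})\,\mathrm{d}S
  \;-\; \int_{V} \nabla g\cdot\mathbf{F}\,\mathrm{d}V,
\qquad
\int_{M} \nabla\cdot\mathbf{F}\,\mathrm{d}V
  \;=\; \oint_{\partial M} \mathbf{F}\cdot\mathbf{n}\,\mathrm{d}S .
```

Identities of this kind allow a volume integral against an implicit function to be traded for a boundary integral, which can then be approximated from a finite set of oriented surface samples; in the indicator-function special case mentioned in the abstract, the right-hand identity applies directly.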
Item: Controlling Stroke Size in Fast Style Transfer with Recurrent Convolutional Neural Network (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Yang, Lingchen; Yang, Lumin; Zhao, Mingbo; Zheng, Youyi. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
Controlling stroke size in Fast Style Transfer remains a difficult task. So far, only a few attempts have been made towards it, and they still exhibit several deficiencies regarding efficiency, flexibility, and diversity. In this paper, we aim to tackle these problems and propose a recurrent convolutional neural subnetwork, which we call the recurrent stroke-pyramid, to control stroke size in Fast Style Transfer. Compared to state-of-the-art methods, our method not only achieves competitive results with far fewer parameters but also provides more flexibility and efficiency, generalizing to unseen, larger stroke sizes and producing a wide range of stroke sizes with only one residual unit. We further embed the recurrent stroke-pyramid into the Multi-Styles and Arbitrary-Style models, achieving both style and stroke-size control in an entirely feed-forward manner with two novel run-time control strategies.

Item: Curvature Continuity Conditions Between Adjacent Toric Surface Patches (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Sun, Lanyin; Zhu, Chungang. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
The toric surface patch is a multi-sided generalization of the classical Bézier surface patch. Geometric continuity of parametric surface patches plays a crucial role in geometric modeling. In this paper, the necessary and sufficient conditions for curvature continuity between toric surface patches are presented using the theory of toric degeneration. Furthermore, some practical sufficient conditions for curvature continuity of toric surface patches are also developed.

Item: Decomposing Images into Layers with Advanced Color Blending (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Koyama, Yuki; Goto, Masataka. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
Digital paintings are often created by compositing semi-transparent layers using various advanced color-blend modes, such as "color-burn," "multiply," and "screen," which can produce interesting non-linear color effects. We propose a method for decomposing an input image into layers with such advanced color blending. Unlike previous layer-decomposition methods, which typically support only linear color-blend modes, ours can handle any user-specified color-blend modes. To enable this, we generalize a previous color-unblending formulation, in which only a specific layering model was considered. We also introduce several techniques for adapting our generalized formulation to practical use, such as post-processing for refining smoothness. Our method lets users explore possible decompositions to find the one that matches their purposes by manipulating the target color-blend mode and desired color distribution for each layer, as well as the number of layers. Thus, the output of our method is a layered, easily editable image composition organized in a way that digital artists are familiar with. Our method is useful for remixing existing illustrations, flexibly editing single-layer paintings, and bringing physically painted media (e.g., oil paintings) into a digital workflow.
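As a point of reference for the blend modes named in this abstract, the snippet below sketches the standard "multiply", "screen" and "color-burn" formulas on float RGB images and a simple blend-then-composite step; the helper names and the epsilon guard are illustrative choices, not code from the paper.

```python
# Standard non-linear colour-blend modes on float images in [0, 1] (numpy).
import numpy as np

def blend_multiply(base, top):
    return base * top

def blend_screen(base, top):
    return 1.0 - (1.0 - base) * (1.0 - top)

def blend_color_burn(base, top, eps=1e-6):
    return 1.0 - np.minimum(1.0, (1.0 - base) / np.maximum(top, eps))

def composite(base, top, alpha, blend):
    # One common convention: blend against the backdrop, then alpha-composite.
    return (1.0 - alpha) * base + alpha * blend(base, top)

base = np.random.rand(4, 4, 3)
layer = np.random.rand(4, 4, 3)
out = composite(base, layer, alpha=0.6, blend=blend_screen)
```

A layer decomposition in this setting amounts to searching for per-layer colours and opacities whose repeated application of composite() reproduces the input image.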
Item: Deep Video Stabilization Using Adversarial Networks (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Xu, Sen-Zhe; Hu, Jun; Wang, Miao; Mu, Tai-Jiang; Hu, Shi-Min. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
Video stabilization is necessary for many hand-held shot videos. Although various video stabilization methods based on the smoothing of 2D, 2.5D or 3D camera paths have been proposed over the past decades, hardly any deep learning methods have been applied to this problem. Instead of explicitly estimating and smoothing the camera path, we present a novel online deep learning framework that learns the stabilization transformation for each unsteady frame, given historical steady frames. Our network is composed of a generative network with spatial transformer networks embedded in different layers, and generates a stable frame for the incoming unstable frame by computing an appropriate affine transformation. We also introduce an adversarial network to determine the stability of a piece of video. The network is trained directly on pairs of steady and unsteady videos. Experiments show that our method produces results comparable to those of traditional methods; moreover, it is capable of handling challenging unsteady video of low quality, where traditional methods fail, such as video with heavy noise or multiple exposures. Our method runs in real time, which is much faster than traditional methods.

Item: Defocus and Motion Blur Detection with Deep Contextual Features (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Kim, Beomseok; Son, Hyeongseok; Park, Seong-Jin; Cho, Sunghyun; Lee, Seungyong. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
We propose a novel approach for detecting two kinds of partial blur, defocus and motion blur, by training a deep convolutional neural network. Existing blur detection methods concentrate on designing low-level features, but such features have difficulty detecting blur in homogeneous regions without enough texture or edges. To handle such regions, we propose a deep encoder-decoder network with long residual skip-connections and multi-scale reconstruction loss functions to exploit high-level contextual features as well as low-level structural features. Another difficulty in partial blur detection is that no datasets are available with images containing both defocus and motion blur, as most existing approaches concentrate only on either defocus or motion blur. To resolve this issue, we construct a synthetic dataset that consists of complex scenes with both types of blur. Experimental results show that our approach effectively detects and classifies blur, outperforming other state-of-the-art methods. Our method can be used for various applications, such as photo editing, blur magnification, and deblurring.
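To make the architectural idea concrete, here is a minimal PyTorch sketch of an encoder-decoder with long, resolution-matched skip connections and a multi-scale loss; the channel counts, depth, three-class output (sharp / defocus / motion) and the cross-entropy form of the loss are assumptions for illustration, not the authors' network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class BlurSegNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.enc1, self.enc2, self.enc3 = conv_block(3, 32), conv_block(32, 64), conv_block(64, 128)
        self.dec2, self.dec1 = conv_block(128, 64), conv_block(64, 32)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                    # full resolution
        e2 = self.enc2(F.max_pool2d(e1, 2))  # 1/2 resolution
        e3 = self.enc3(F.max_pool2d(e2, 2))  # 1/4 resolution (contextual features)
        d2 = self.dec2(F.interpolate(e3, scale_factor=2, mode='bilinear', align_corners=False)) + e2  # long skip
        d1 = self.dec1(F.interpolate(d2, scale_factor=2, mode='bilinear', align_corners=False)) + e1  # long skip
        return self.head(d1)                 # per-pixel blur-type logits

def multi_scale_loss(logits, labels, scales=(1.0, 0.5, 0.25)):
    # Supervise the prediction at several resolutions (assumed cross-entropy form).
    loss = 0.0
    for s in scales:
        l = logits if s == 1.0 else F.interpolate(logits, scale_factor=s, mode='bilinear', align_corners=False)
        y = labels if s == 1.0 else F.interpolate(labels.float().unsqueeze(1), scale_factor=s).squeeze(1).long()
        loss = loss + F.cross_entropy(l, y)
    return loss
```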
Item: Directing the Photography: Combining Cinematic Rules, Indirect Light Controls and Lighting-by-Example (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Galvane, Quentin; Lino, Christophe; Christie, Marc; Cozot, Rémi. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
The placement of lights in a 3D scene is a technical and artistic task that requires time and trained skills. Most 3D modelling tools only provide direct control of light sources, through the manipulation of parameters such as size, location, flux (the perceived power of the light) or opening angle (the light frustum). Previous approaches have relied on automated or semi-automated techniques to relieve users of such low-level manipulations, at the expense of significant computational cost. In this paper, guided by discussions with experts in scene and object lighting, we propose an indirect control of area light sources. We first formalize the classical 3-point lighting design principle (key-light, fill-lights and back/rim-lights) in a parametric model. Given a key-light placed in the scene, we then provide a computational approach to (i) automatically compute the position and size of the fill-lights and back/rim-lights by analyzing the geometry of the 3D character, and (ii) automatically compute the flux and size of the key, fill and back/rim lights, given a sample reference image, in a computationally efficient way. Results demonstrate the benefits of the approach for quickly lighting 3D characters, and further demonstrate the feasibility of interactive control of multiple lights through image features.

Item: DMAT: Deformable Medial Axis Transform for Animated Mesh Approximation (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Yang, Baorong; Yao, Junfeng; Guo, Xiaohu. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
Extracting a faithful and compact representation of an animated surface mesh is an important problem in computer graphics. However, surface-based methods have limited approximation power for volume preservation when the animated sequences are extremely simplified. In this paper, we introduce the Deformable Medial Axis Transform (DMAT), a deformable medial mesh composed of a set of animated spheres. Starting by extracting an accurate and compact representation of a static MAT as the template and partitioning the vertices of the input surface as correspondences for each medial primitive, we present a correspondence-based approximation method equipped with an As-Rigid-As-Possible (ARAP) deformation energy defined on medial primitives. As a result, our algorithm produces a DMAT with consistent connectivity across the whole sequence that accurately approximates the input animated surfaces.

Item: FashionGAN: Display your fashion design using Conditional Generative Adversarial Nets (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Cui, Yi Rui; Liu, Qi; Gao, Cheng Ying; Su, Zhuo. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
Virtual garment display plays an important role in fashion design, as it can directly show the design effect of a garment without the need to make a physical sample, as in the traditional clothing industry. In this paper, we propose an end-to-end virtual garment display method based on Conditional Generative Adversarial Networks. Unlike existing 3D virtual garment methods, which require complex interactions and domain-specific user knowledge, our method only requires users to input a desired fashion sketch and a specified fabric image; an image of the virtual garment, whose shape and texture are consistent with the input fashion sketch and fabric image, is then generated quickly and automatically. Moreover, the method can also be extended to contour images and garment images, which further improves the reuse rate of fashion designs. Compared with existing image-to-image methods, the quality of the images generated by our method is better in terms of color and shape.
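The conditioning idea, feeding both the fashion sketch and the fabric image to the generator, can be sketched as a simple channel concatenation; the toy network below is an illustrative stand-in, not the FashionGAN architecture, and omits the adversarial discriminator entirely.

```python
import torch
import torch.nn as nn

class ToyConditionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 3, 64, 3, padding=1), nn.ReLU(inplace=True),  # sketch + fabric channels
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())                  # garment image in [-1, 1]

    def forward(self, sketch, fabric):
        # Condition the generator by stacking both inputs along the channel axis.
        return self.net(torch.cat([sketch, fabric], dim=1))

g = ToyConditionalGenerator()
garment = g(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128))
```

In a full conditional GAN, a discriminator would additionally judge whether a garment image is consistent with the given sketch and fabric.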
Item: Feature Generation for Adaptive Gradient-Domain Path Tracing (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Back, Jonghee; Yoon, Sung-Eui; Moon, Bochang. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
In this paper, we propose a new technique that incorporates recent adaptive rendering approaches built upon local regression theory into a gradient-domain path tracing framework, in order to achieve high-quality rendering results. Our method aims to reduce the random artifacts introduced by random sampling of image colors and gradients. Our high-level approach is to identify a feature image from noisy gradients and pass the image to an existing local-regression-based adaptive method, so that adaptive sampling and reconstruction using our feature can boost the performance of gradient-domain rendering. To realize this idea, we derive an ideal feature in the form of image gradients and propose an estimation process for the ideal feature in the presence of noise in the image gradients. We demonstrate that our integrated adaptive solution leads to performance improvements for a gradient-domain path tracer by seamlessly incorporating recent adaptive sampling and reconstruction strategies through our estimated feature.

Item: Few-shot Learning of Homogeneous Human Locomotion Styles (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Mason, Ian; Starke, Sebastian; Zhang, He; Bilen, Hakan; Komura, Taku. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
Using neural networks to learn motion controllers from motion capture data is becoming popular due to the natural and smooth motions they can produce, the wide range of movements they can learn, and their compactness once trained. Despite these advantages, such systems require large amounts of motion capture data for each new character or style of motion to be generated, and they have to undergo lengthy retraining, and often re-engineering, to produce acceptable results. This can make their use impractical for animators and designers, and solving this issue is an open and rather unexplored problem in computer graphics. In this paper we propose a transfer learning approach for adapting a learned neural network to characters that move in styles different from those on which the original network was trained. Given a pretrained character controller in the form of a Phase-Functioned Neural Network for locomotion, our system can quickly adapt the locomotion to novel styles using only a short motion clip as an example. We introduce a canonical polyadic tensor decomposition to reduce the number of parameters required for learning each new style, which both reduces the memory burden at runtime and facilitates learning from smaller quantities of data. We show that our system is suitable for learning stylized motions from few clips of motion data and for synthesizing smooth motions in real time.
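As a reminder of why a canonical polyadic (CP) decomposition shrinks the per-style parameter count, the snippet below factorises a 3-way weight tensor into rank-one terms; the tensor sizes and rank are arbitrary illustrative values, not those used by the paper.

```python
import numpy as np

I, J, K, R = 64, 64, 8, 10          # tensor modes and CP rank (assumed values)
A = np.random.randn(I, R)
B = np.random.randn(J, R)
C = np.random.randn(K, R)

# W[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
W = np.einsum('ir,jr,kr->ijk', A, B, C)

full_params = I * J * K             # 32768 values if stored densely
cp_params = R * (I + J + K)         # 1360 values in factorised form
print(W.shape, full_params, cp_params)
```

Storing the factors A, B and C requires R·(I+J+K) values instead of I·J·K for the dense tensor, which is the kind of saving the abstract alludes to.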
Item: Generative Adversarial Image Super-Resolution Through Deep Dense Skip Connections (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Zhu, Xiaobin; Li, Zhuangzi; Zhang, Xiaoyu; Li, Haisheng; Xue, Ziyu; Wang, Lei. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
Recently, image super-resolution methods based on Convolutional Neural Networks (CNNs) and Generative Adversarial Nets (GANs) have shown promising performance. However, these methods tend to generate blurry and over-smoothed super-resolved (SR) images, due to incomplete loss functions and underpowered network architectures. In this paper, a novel generative adversarial image super-resolution method using deep dense skip connections (GSR-DDNet) is proposed to solve these problems. It takes advantage of the GAN's ability to model data distributions, so that GSR-DDNet can select informative feature representations and model the mapping between low-quality and high-quality images in an adversarial way. The pipeline of the proposed method consists of three main components: 1) the generator, a novel dense skip-connection network with a deep structure for learning a robust mapping function, which generates SR images from low-resolution images; 2) a feature extraction network based on VGG-19, adopted to capture high-frequency feature maps for the content loss; and 3) a discriminator with the Wasserstein distance, adopted to assess the overall style of the SR and ground-truth images. Experiments conducted on four publicly available datasets demonstrate its superiority over state-of-the-art methods.

Item: GPU-based Polynomial Finite Element Matrix Assembly for Simplex Meshes (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Mueller-Roemer, Johannes Sebastian; Stork, André. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
In this paper, we present a matrix assembly technique for arbitrary polynomial order finite element simulations on simplex meshes for graphics processing units (GPUs). Compared to the current state of the art in GPU-based matrix assembly, we avoid the need for an intermediate sparse matrix and perform assembly directly into the final, GPU-optimized data structure. Thereby, we avoid the resulting 180% to 600% memory overhead, depending on polynomial order, and the associated allocation time, while simplifying the assembly code and using a more compact mesh representation. We compare our method with existing algorithms and demonstrate significant speedups.

Item: Instant Stippling on 3D Scenes (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Ma, Lei; Guo, Jianwei; Yan, Dong-Ming; Sun, Hanqiu; Chen, Yanyun. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
In this paper, we present a novel real-time approach for generating high-quality stippling on 3D scenes. The proposed method is built on a precomputed 2D sample sequence, called an incremental Voronoi set, with blue-noise properties. A rejection sampling scheme is then applied to achieve tone reproduction, by thresholding the sample indices proportionally to the inverse target tonal value to produce a suitable stipple density. Our approach is suitable for stippling large-scale or even dynamic scenes because the thresholding of individual stipples is trivially parallelizable. In addition, the static nature of the underlying sequence benefits the frame-to-frame coherence of the stippling. Finally, we propose an extension that supports stipples of varying sizes and tonal values, leading to smoother spatial and temporal transitions. Experimental results reveal that the temporal coherence and real-time performance of our approach are superior to those of previous approaches.
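The index-thresholding step described above can be sketched in a few lines; here plain uniform random points stand in for the precomputed incremental Voronoi set, and the linear density-from-tone mapping is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096
samples = rng.random((N, 2))            # placeholder for a progressive 2D sample sequence
tone = lambda p: p[0]                   # toy target tone in [0, 1], 1 = white

stipples = []
for i, p in enumerate(samples):
    density = 1.0 - tone(p)             # darker tone -> higher stipple density (assumed mapping)
    if i < N * density:                 # keep sample i only below its index threshold
        stipples.append(p)
stipples = np.asarray(stipples)
```

Because each sample is accepted or rejected independently of all others, the loop parallelises trivially, which is what makes this kind of thresholding suitable for large or dynamic scenes.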
Item: Learning Scene Illumination by Pairwise Photos from Rear and Front Mobile Cameras (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Cheng, Dachuan; Shi, Jian; Chen, Yanyun; Deng, Xiaoming; Zhang, Xiaopeng. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
Illumination estimation is an essential problem in computer vision, graphics and augmented reality. In this paper, we propose a learning-based method to recover low-frequency scene illumination, represented as spherical harmonic (SH) functions, from pairwise photos taken by the rear and front cameras on mobile devices. An end-to-end deep convolutional neural network (CNN) structure is designed to process images of the symmetric views and predict the SH coefficients. We introduce a novel render loss to improve the rendering quality of the predicted illumination. A high-quality high-dynamic-range (HDR) panoramic image dataset was developed for training and evaluation. Experiments show that our model produces visually and quantitatively superior results compared to the state of the art. Moreover, our method is practical for mobile-based applications.
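For readers unfamiliar with the representation, low-frequency illumination as second-order spherical harmonics boils down to nine coefficients per colour channel that are dotted with the SH basis evaluated at a direction. The sketch below uses the standard real SH basis constants for bands 0-2, with placeholder coefficients rather than network predictions.

```python
import numpy as np

def sh_basis_order2(d):
    """Real SH basis Y_0..Y_8 (bands 0-2) evaluated at unit direction d = (x, y, z)."""
    x, y, z = d
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ])

sh_coeffs = np.random.randn(9, 3)          # placeholder: 9 coefficients per RGB channel
d = np.array([0.0, 0.0, 1.0])              # query direction (unit length)
radiance = sh_basis_order2(d) @ sh_coeffs  # low-frequency environment colour from that direction
```

A render loss in this setting compares images shaded with the predicted coefficients against images shaded with the ground-truth coefficients, rather than comparing the coefficients directly.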
Item: Light Optimization for Detail Highlighting (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Gkaravelis, Anastasios; Papaioannou, Georgios. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
In this paper we propose an effective technique for the automatic arrangement of spotlights and other luminaires on or near user-provided arbitrary mounting surfaces in order to highlight the geometric details of complex objects. Since potential applications include lighting design for exhibitions and similar installations, the method takes into account obstructing geometry and potential occlusion from visitors and other non-permanent blocking geometry. Our technique generates the most appropriate position and orientation for the light sources based on local contrast maximization near salient geometric features and a clustering mechanism, producing consistent and view-independent results with minimal user intervention. We validate our method on realistic test cases including multiple and disjoint exhibits as well as high-occlusion scenarios.

Item: Local and Hierarchical Refinement for Subdivision Gradient Meshes (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Verstraaten, Teun W.; Kosinka, Jiri. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
Gradient mesh design tools allow users to create detailed scalable images, traditionally through the creation and manipulation of a (dense) mesh with regular rectangular topology. Recent advances make it possible for gradient meshes to have arbitrary manifold topology, using a modified Catmull-Clark subdivision scheme to define the resulting geometry and colour [LKSD17]. We present two novel methods that allow local and hierarchical refinement of both colour and geometry for such subdivision gradient meshes. Our methods leverage the mesh properties that this particular subdivision scheme ensures. In both methods, artists enjoy all the standard capabilities of manipulating the mesh and the associated colour gradients at the coarsest level as well as locally at refined levels. Further novel features include interpolation of both the positions and colours of the vertices of the input meshes, local detail that follows coarser-level edits, and support for sharp colour transitions, all at any level in the hierarchy offered by subdivision.

Item: A New Uniform Format for 360 VR Videos (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Guo, Juan; Pei, Qikai K.; Ma, Guilong L.; Liu, Li; Zhang, Xinyu Y. Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes.
Recent breakthroughs in VR technologies, especially affordable VR headsets and the massive install base of smartphones, are creating a fast-growing demand for 3D immersive VR content. 360 VR videos record the surrounding environment in every direction and give users a fully immersive experience. Thanks to the many 360 cameras launched in the past few years, 360 video content creation is exploding, and 360 VR videos are becoming a new video standard in the digital industry. While ERP and CMP are perhaps the most prevalent projection and packing layouts for storing 360 VR videos, they suffer from severe projection distortion, internal discontinuity seams, or disadvantages in aspect ratio. We introduce a new format for packing and storing 360 VR videos using a two-stage mapping. Hemispheres are seamlessly and uniformly mapped onto squares, and the two resulting squares are stitched to form a rectangle with a 2:1 aspect ratio. Our approach avoids internal discontinuities and generates a uniform pixel distribution, while keeping the aspect ratio close to the majority standard aspect ratio of 16:9.
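To illustrate what a uniform two-stage hemisphere-to-square mapping can look like, the sketch below composes the Shirley-Chiu concentric square-to-disk map with an inverse Lambert azimuthal equal-area step; this is a generic equal-area construction chosen for illustration, not necessarily the mapping proposed in the paper.

```python
import numpy as np

def square_to_disk(a, b):
    """Shirley-Chiu concentric mapping, (a, b) in [-1, 1]^2 -> unit disk."""
    if a == 0.0 and b == 0.0:
        return 0.0, 0.0
    if abs(a) > abs(b):
        r, phi = a, (np.pi / 4.0) * (b / a)
    else:
        r, phi = b, (np.pi / 2.0) - (np.pi / 4.0) * (a / b)
    return r * np.cos(phi), r * np.sin(phi)

def square_to_hemisphere(a, b):
    """Equal-area map from the square to unit directions with z >= 0."""
    u, v = square_to_disk(a, b)
    d = np.hypot(u, v)                        # disk radius in [0, 1]
    theta = 2.0 * np.arcsin(d / np.sqrt(2.0)) # inverse Lambert azimuthal equal-area radius
    phi = np.arctan2(v, u)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

dir_front = square_to_hemisphere(0.3, -0.7)   # one sample direction on the front hemisphere
```

Both stages preserve area up to a constant factor, so pixels in the square cover the hemisphere uniformly; packing the two hemisphere squares side by side yields the 2:1 rectangle mentioned in the abstract.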