Browsing by Author "Wang, Changbo"
Now showing 1 - 5 of 5
Item Deep Video-Based Performance Synthesis from Sparse Multi-View Capture (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Chen, Mingjia; Wang, Changbo; Liu, Ligang
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
Abstract: We present a deep learning based technique that enables novel-view videos of human performances to be synthesized from sparse multi-view captures. While performance capture from a sparse set of videos has received significant attention, there has been relatively little progress on non-rigid subjects such as human bodies. The rich articulation modes of the human body make it challenging to synthesize and interpolate the model well. To address this problem, we propose a novel deep learning based framework that directly predicts novel-view videos of human performances without explicit 3D reconstruction. Our method is a composition of two steps: novel-view prediction and detail enhancement. We first learn a novel deep generative query network for view prediction, synthesizing novel-view performances from a sparse set of five or fewer camera videos. Then, we use a new generative adversarial network to enhance the fine-scale details of the first step's results. This opens up the possibility of high-quality, low-cost video-based performance synthesis, which is gaining popularity for VR and AR applications. We demonstrate a variety of promising results, where our method synthesizes more robust and accurate performances than existing state-of-the-art approaches when only sparse views are available.

Item Exploring Contextual Relationships in 3D Cloud Points by Semantic Knowledge Mining (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Chen, Lianggangxu; Lu, Jiale; Cai, Yiqing; Wang, Changbo; He, Gaoqi
Editors: Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
Abstract: 3D scene graph generation (SGG) aims to predict the classes of objects and predicates simultaneously in one 3D point cloud scene with instance segmentation. Since the underlying semantics of 3D point clouds are spatial, recent approaches to the 3D SGG task usually have difficulty understanding global contextual semantic relationships and neglect the intrinsic 3D visual structures. To build a global scope of semantic relationships, we first propose two types of Semantic Clue (SC), at the entity level and the path level, respectively. SCs can be extracted from the training set and modeled as co-occurrence probabilities between entities. Then a novel Semantic Clue aware Graph Convolution Network (SC-GCN) is designed to explicitly model each SC, whose messages are passed within its specific neighborhood pattern. To construct interactions between the 3D visual and semantic modalities, a visual-language transformer (VLT) module is proposed to jointly learn the correlation between 3D visual features and class label embeddings. Systematic experiments on the 3D semantic scene graph (3DSSG) dataset show that our full method achieves state-of-the-art performance.

Item A Novel Plastic Phase-Field Method for Ductile Fracture with GPU Optimization (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Zhao, Zipeng; Huang, Kemeng; Li, Chen; Wang, Changbo; Qin, Hong
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
Abstract: In this paper, we propose a novel plastic phase-field (PPF) method to efficiently simulate ductile fracture with GPU optimization. At the theoretical level of physically-based modeling and simulation, our PPF approach assumes that the fracture sensitivity of the material increases with plastic strain accumulation. As a result, we first develop a hardening-related fracture toughness function for phase-field evolution. Second, we follow the associative flow rule and adopt a novel degraded von Mises yield criterion. In this way, we establish a tight coupling between the phase field and the plastic treatment, with which our PPF method can capture distinct elastoplasticity, necking, and fracture characteristics during ductile fracture simulation. At the numerical level, we further devise an advanced parallel framework for GPU optimization that takes full advantage of the hierarchical GPU architecture. Our strategy dramatically enhances the computational efficiency of preprocessing and phase-field evolution for our PPF method with the material point method (MPM). In extensive experiments on a variety of benchmarks, our method reaches up to a 1.56x speedup over the baseline GPU MPM. Finally, comprehensive simulation results confirm that this new PPF method can efficiently and realistically simulate complex ductile fracture phenomena in 3D interactive graphics and animation.
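To make the two-step design of the performance-synthesis paper above concrete, here is a minimal PyTorch-style sketch of a coarse view-prediction network followed by a residual detail refiner. All module names, layer choices, and tensor shapes are illustrative assumptions, not the authors' actual architecture.

```python
# A minimal sketch of the two-stage pipeline (novel-view prediction, then
# GAN-style detail enhancement). Everything here is an assumed stand-in.
import torch
import torch.nn as nn

class ViewPredictor(nn.Module):
    """Stage 1: predict a coarse novel view from <=5 source views + target pose."""
    def __init__(self, feat=64):
        super().__init__()
        # Encode each RGB source view, pool across views, then decode.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Conv2d(feat + 16, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 3, 3, padding=1))

    def forward(self, views, pose_emb):
        # views: (B, V, 3, H, W); pose_emb: (B, 16, H, W) target-camera encoding
        b, v, c, h, w = views.shape
        f = self.encoder(views.reshape(b * v, c, h, w)).reshape(b, v, -1, h, w)
        f = f.mean(dim=1)                      # aggregate the sparse views
        return self.decoder(torch.cat([f, pose_emb], dim=1))

class DetailEnhancer(nn.Module):
    """Stage 2: adversarially trained refiner that sharpens fine-scale detail."""
    def __init__(self, feat=64):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 3, 3, padding=1))

    def forward(self, coarse):
        return coarse + self.refine(coarse)    # residual detail on coarse view

# Usage: coarse = ViewPredictor()(views, pose); final = DetailEnhancer()(coarse)
```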
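For the scene-graph paper, the entity-level Semantic Clue reduces to co-occurrence statistics mined from the training set. The sketch below shows one plausible way to compute such probabilities; the data layout and normalization are assumptions for illustration.

```python
# Assumed sketch: entity-level "semantic clues" as co-occurrence probabilities.
from collections import Counter
from itertools import combinations

def entity_cooccurrence(scenes):
    """scenes: list of scenes, each a list of object class labels.
    Returns P(a and b appear together) ~ count(a, b) / count(a)."""
    pair_counts, entity_counts = Counter(), Counter()
    for labels in scenes:
        uniq = sorted(set(labels))             # count each class once per scene
        entity_counts.update(uniq)
        pair_counts.update(combinations(uniq, 2))
    return {(a, b): n / entity_counts[a] for (a, b), n in pair_counts.items()}

# Example: chairs frequently co-occur with tables in indoor scans.
scenes = [["chair", "table", "lamp"], ["chair", "table"], ["sofa", "lamp"]]
print(entity_cooccurrence(scenes)[("chair", "table")])  # -> 1.0
```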
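For the plastic phase-field paper, the key coupling is a fracture toughness that falls as plastic strain accumulates, so heavily deformed material fractures more readily. The toy function below illustrates that idea; the exponential form and constants are assumed, not taken from the paper.

```python
# Illustrative (assumed) hardening-related fracture toughness: effective
# toughness G decays with accumulated plastic strain.
import numpy as np

def fracture_toughness(G0, alpha_p, k=5.0, G_min_ratio=0.1):
    """G0: initial toughness; alpha_p: accumulated plastic strain (>= 0);
    k: decay rate; G_min_ratio: floor so toughness never vanishes entirely."""
    return G0 * (G_min_ratio + (1.0 - G_min_ratio) * np.exp(-k * alpha_p))

# As plastic strain accumulates, the phase field evolves (fractures) more easily.
for a in (0.0, 0.1, 0.5):
    print(f"alpha_p={a:.1f} -> G={fracture_toughness(1.0, a):.3f}")
```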
Item A Rapid, End-to-end, Generative Model for Gaseous Phenomena from Limited Views (© 2021 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021)
Authors: Qiu, Sheng; Li, Chen; Wang, Changbo; Qin, Hong
Editors: Benes, Bedrich; Hauser, Helwig
Abstract: Despite the rapid development and proliferation of graphics hardware for scene capture over the most recent decade, high-resolution, real-time 3D/4D acquisition of gaseous scenes (e.g., smoke) remains technically challenging in graphics research. In this paper, we explore a hybrid approach that simultaneously takes advantage of both model-centric and data-driven methods. Specifically, we develop a novel conditional generative model to rapidly reconstruct the temporal density and velocity fields of gaseous phenomena from a sequence of two projection views. The data-driven component tightly couples the density update with the estimation of flow motion and, as a result, greatly improves the reconstruction performance for smoke scenes. First, we employ a conditional generative network to generate the initial density field from the input projection views and estimate the flow motion from adjacent frames. Second, we utilize a differentiable advection layer and design a velocity estimation network with a long-term mechanism to enable end-to-end training and more stable graphics effects. Third, we can re-simulate the input scene with flexible coupling effects based on the estimated velocity field, subject to artists' guidance or user interaction. Moreover, our generative model can accommodate a single projection view as input; in practice, additional projection views enable higher-fidelity reconstruction with more realistic and finer details. We have conducted extensive experiments to confirm the effectiveness, efficiency, and robustness of our new method compared with previous state-of-the-art techniques.
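The differentiable advection layer mentioned above can be sketched as a semi-Lagrangian backtrace implemented with grid_sample, so gradients flow from the advected density back into the estimated velocity. The 2D layout and grid conventions here are illustrative assumptions, not the paper's implementation.

```python
# Assumed sketch of a differentiable 2D advection layer (semi-Lagrangian
# backtrace via bilinear sampling).
import torch
import torch.nn.functional as F

def advect(density, velocity, dt=0.1):
    """density: (B, 1, H, W); velocity: (B, 2, H, W) in grid cells per step."""
    b, _, h, w = density.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=torch.float32, device=density.device),
        torch.arange(w, dtype=torch.float32, device=density.device),
        indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2)
    # Trace each cell backwards along the velocity field.
    back = grid - dt * velocity.permute(0, 2, 3, 1)
    # Normalize to [-1, 1] for grid_sample's sampling convention.
    back = 2.0 * back / torch.tensor([w - 1.0, h - 1.0], device=density.device) - 1.0
    return F.grid_sample(density, back, align_corners=True)

# Usage: rho_next = advect(rho, vel); losses on rho_next backprop into vel.
```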
Item Translucent Image Recoloring through Homography Estimation (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Huang, Yifei; Wang, Changbo; Li, Chenhui
Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes
Abstract: Image color editing techniques are of great significance for users who wish to adjust image colors, but previous work has paid little attention to translucent images. In this paper, we propose a new method to recolor translucent images while preserving the detailed information and color relationships of the source image. We treat recoloring as a location transformation problem and solve it in two steps: automatic palette extraction and homography estimation. First, we propose the Hmeans method to extract the dominant colors of the source image based on histogram statistics and clustering. Then, we use homography estimation to map the source colors to the desired colors in the CIE-LAB color space. Further, we adopt a non-linear optimization approach to refine the result of the previous step. The proposed method maintains high fidelity to the source image. Experiments have shown that our method generates state-of-the-art visual results, particularly in shadow areas. Source images with ground truth generated by a ray tracer further verify the effectiveness of our method.
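As a rough illustration of that two-step recoloring pipeline (palette extraction, then a palette-to-palette mapping in CIE-LAB), the sketch below clusters dominant colors and fits the affine special case of a homography in homogeneous coordinates. KMeans stands in for the paper's Hmeans, and the paper additionally estimates a full projective mapping with non-linear refinement; all names and steps here are assumptions.

```python
# Assumed sketch: palette extraction + least-squares color mapping in CIE-LAB.
import numpy as np
from sklearn.cluster import KMeans

def extract_palette(lab_pixels, k=5):
    """lab_pixels: (N, 3) CIE-LAB values. Returns (k, 3) dominant colors."""
    return KMeans(n_clusters=k, n_init=10).fit(lab_pixels).cluster_centers_

def fit_color_map(src_palette, dst_palette):
    """Least-squares 4x3 affine map src -> dst in LAB (homogeneous coords)."""
    src_h = np.hstack([src_palette, np.ones((len(src_palette), 1))])  # (k, 4)
    M, *_ = np.linalg.lstsq(src_h, dst_palette, rcond=None)           # (4, 3)
    return M

def recolor(lab_pixels, M):
    """Apply the fitted map to every pixel of the (N, 3) LAB image."""
    pix_h = np.hstack([lab_pixels, np.ones((len(lab_pixels), 1))])
    return pix_h @ M

# Usage: palette = extract_palette(img_lab); M = fit_color_map(palette, edited)
# out = recolor(img_lab, M)
```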