Browsing by Author "Xu, Kai"
Now showing 1 - 9 of 9
Item: Active Scene Understanding via Online Semantic Reconstruction (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Zheng, Lintao; Zhu, Chenyang; Zhang, Jiazhao; Zhao, Hang; Huang, Hui; Niessner, Matthias; Xu, Kai
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
Abstract: We propose a novel approach to robot-operated active understanding of unknown indoor scenes, based on online RGBD reconstruction with semantic segmentation. In our method, the exploratory robot scanning is both driven by and targeted at the recognition and segmentation of semantic objects in the scene. Our algorithm is built on top of a volumetric depth fusion framework and performs real-time voxel-based semantic labeling over the online reconstructed volume. The robot is guided by an online estimated discrete viewing score field (VSF) parameterized over the 3D space of 2D location and azimuth rotation. The VSF stores, for each grid cell, the score of the corresponding view, which measures how much that view reduces the uncertainty (entropy) of both geometric reconstruction and semantic labeling. Based on the VSF, we select the next best view (NBV) as the target for each time step. We then jointly optimize the traverse path and camera trajectory between two adjacent NBVs by maximizing the integral viewing score (information gain) along the path and trajectory. Through extensive evaluation, we show that our method achieves efficient and accurate online scene parsing during exploratory scanning.

Item: CGVC 2020: Frontmatter (The Eurographics Association, 2020)
Authors: Ritsos, Panagiotis D.; Xu, Kai
Editors: Ritsos, Panagiotis D.; Xu, Kai

Item: CGVC 2021: Frontmatter (The Eurographics Association, 2021)
Authors: Xu, Kai; Turner, Martin
Editors: Xu, Kai; Turner, Martin

Item: Data‐Driven Shape Analysis and Processing (The Eurographics Association and John Wiley & Sons Ltd., 2017)
Authors: Xu, Kai; Kim, Vladimir G.; Huang, Qixing; Kalogerakis, Evangelos
Editors: Chen, Min; Zhang, Hao (Richard)
Abstract: Data‐driven methods serve an increasingly important role in discovering geometric, structural and semantic relationships between shapes. In contrast to traditional approaches that process shapes in isolation, data‐driven methods aggregate information from 3D model collections to improve the analysis, modelling and editing of shapes. Data‐driven methods are also able to learn computational models that reason about properties and relationships of shapes without relying on hard‐coded rules or explicitly programmed instructions. Through reviewing the literature, we provide an overview of the main concepts and components of these methods, and discuss their application to classification, segmentation, matching, reconstruction, modelling and exploration, as well as scene analysis and synthesis. We conclude our report with ideas that can inspire future research in data‐driven shape analysis and processing.
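The entropy-based view scoring behind the first item above ("Active Scene Understanding via Online Semantic Reconstruction") lends itself to a minimal sketch. The grid parameterization, the probability inputs, and every function name below are illustrative assumptions made for exposition, not the authors' implementation:

import numpy as np

def binary_entropy(p):
    """Shannon entropy of a Bernoulli occupancy probability."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def view_score(occupancy_p, label_p, visible):
    """Score one candidate view by the total uncertainty (entropy) of the
    voxels it would observe, covering both geometry (occupancy) and
    semantics (per-voxel class distributions)."""
    geo = binary_entropy(occupancy_p[visible]).sum()
    q = np.clip(label_p[visible], 1e-9, 1.0)   # rows: per-voxel class distributions
    sem = -(q * np.log(q)).sum()
    return geo + sem

def next_best_view(views, occupancy_p, label_p, visibility):
    """Return the (x, y, azimuth) view with the highest viewing score.
    visibility[i] holds the indices of voxels that views[i] would observe."""
    scores = [view_score(occupancy_p, label_p, vis) for vis in visibility]
    return views[int(np.argmax(scores))]

The joint optimization of the traverse path between adjacent NBVs, which the abstract also describes, is omitted here; this sketch covers only the per-view scoring and argmax selection.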
Item: Learning Generative Models of 3D Structures (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Chaudhuri, Siddhartha; Ritchie, Daniel; Wu, Jiajun; Xu, Kai; Zhang, Hao
Editors: Mantiuk, Rafal; Sundstedt, Veronica
Abstract: 3D models of objects and scenes are critical to many academic disciplines and industrial applications. Of particular interest is the emerging opportunity for 3D graphics to serve artificial intelligence: computer vision systems can benefit from synthetically generated training data rendered from virtual 3D scenes, and robots can be trained to navigate in and interact with real-world environments by first acquiring skills in simulated ones. One of the most promising ways to achieve this is by learning and applying generative models of 3D content: computer programs that can synthesize new 3D shapes and scenes. To allow users to edit and manipulate the synthesized 3D content to achieve their goals, the generative model should also be structure-aware: it should express 3D shapes and scenes using abstractions that allow manipulation of their high-level structure. This state-of-the-art report surveys historical work and recent progress on learning structure-aware generative models of 3D shapes and scenes. We present fundamental representations of 3D shape and scene geometry and structures, describe prominent methodologies including probabilistic models, deep generative models, program synthesis, and neural networks for structured data, and cover many recent methods for structure-aware synthesis of 3D shapes and indoor scenes.

Item: Learning Generative Models of 3D Structures (The Eurographics Association, 2019)
Authors: Chaudhuri, Siddhartha; Ritchie, Daniel; Xu, Kai; Zhang, Hao (Richard)
Editors: Jakob, Wenzel; Puppo, Enrico
Abstract: Many important applications demand 3D content, yet 3D modeling is a notoriously difficult and inaccessible activity. This tutorial provides a crash course in one of the most promising approaches for democratizing 3D modeling: learning generative models of 3D structures. Such generative models typically describe a statistical distribution over a space of possible 3D shapes or 3D scenes, as well as a procedure for sampling new shapes or scenes from the distribution. To be useful to non-experts for design purposes, a generative model must represent 3D content at a high level of abstraction in which the user can express their goals; that is, it must be structure-aware. In this tutorial, we will take a deep dive into the most exciting methods for building generative models of both individual shapes and composite scenes, highlighting how standard data-driven methods need to be adapted, or new methods developed, to create models that are both generative and structure-aware. The tutorial assumes knowledge of the fundamentals of computer graphics, linear algebra, and probability, though a quick refresher of important algorithmic ideas from geometric analysis and machine learning is included. Attendees should come away from this tutorial with a broad understanding of the historical and current work in generative 3D modeling, as well as familiarity with the mathematical tools needed to start their own research or product development in this area.

Item: Survey on the Analysis of User Interactions and Visualization Provenance (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Xu, Kai; Ottley, Alvitta; Walchshofer, Conny; Streit, Marc; Chang, Remco; Wenskovitch, John
Editors: Smit, Noeska; Oeltze-Jafra, Steffen; Wang, Bei
Abstract: There is a fast-growing literature on provenance-related research, covering aspects such as its theoretical framework, use cases, and techniques for capturing, visualizing, and analyzing provenance data. As a result, there is an increasing need to identify and taxonomize the existing scholarship. Such an organization of the research landscape will provide a complete picture of the current state of inquiry and identify knowledge gaps or possible avenues for further investigation. In this STAR, we aim to produce a comprehensive survey of work in the data visualization and visual analytics field that focuses on the analysis of user interaction and provenance data. We structure our survey around three primary questions: (1) WHY analyze provenance data, (2) WHAT provenance data to encode and how to encode it, and (3) HOW to analyze provenance data. A concluding discussion provides evidence-based guidelines and highlights concrete opportunities for future development in this emerging area.

Item: Visual Analytics of Contact Tracing Policy Simulations During an Emergency Response (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Sondag, Max; Turkay, Cagatay; Xu, Kai; Matthews, Louise; Mohr, Sibylle; Archambault, Daniel
Editors: Borgo, Rita; Marai, G. Elisabeta; Schreck, Tobias
Abstract: Epidemiologists use individual-based models to (a) simulate disease spread over dynamic contact networks and (b) investigate strategies to control the outbreak. These model simulations generate complex 'infection maps' of time-varying transmission trees and patterns of spread. Conventional statistical analysis of the outputs offers only limited interpretation. This paper presents a novel visual analytics approach for the inspection of infection maps along with their associated metadata, developed collaboratively over 16 months in an evolving emergency response situation. We introduce the concept of representative trees, which summarize the many components of a time-varying infection map while preserving the epidemiological characteristics of each individual transmission tree. We also present interactive visualization techniques for the quick assessment of different control policies. Through a series of case studies and a qualitative evaluation by epidemiologists, we demonstrate how our visualizations can help improve the development of epidemiological models and help interpret complex transmission patterns.

Item: Weakly Supervised Part-wise 3D Shape Reconstruction from Single-View RGB Images (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Niu, Chengjie; Yu, Yang; Bian, Zhenwei; Li, Jun; Xu, Kai
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
Abstract: For deep learning models to truly understand 2D images for 3D geometry recovery, we argue that single-view reconstruction should be learned in a part-aware and weakly supervised manner. Such models lead to a more profound interpretation of 2D images, in which part-based parsing and assembling are involved. To this end, we learn a deep neural network that takes a single-view RGB image as input and outputs a 3D shape in parts, each represented as a 3D point cloud, via an array of 3D part generators. In particular, we devise two levels of generative adversarial networks (GANs) to generate shapes with both correct part geometry and a reasonable overall structure. To enable self-taught network training, we devise a differentiable projection module along with a self-projection loss that measures the error between the shape projection and the input image. The training data in our method are unpaired: the 2D images and the part-decomposed 3D shapes do not need to correspond. Through qualitative and quantitative evaluations on public datasets, we show that our method achieves good performance in part-wise single-view reconstruction.
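The self-projection loss in the last item above can be made concrete with a short sketch: differentiably project the predicted point cloud into the image plane and penalize disagreement with the input silhouette. The orthographic camera, Gaussian splatting, resolution, and function names below are assumptions for illustration; the paper's actual differentiable projection module may differ:

import torch

def soft_silhouette(points, res=64, sigma=0.05):
    """Differentiably splat a point cloud (N, 3) onto a res x res image by
    orthographic projection of the xy coordinates: each pixel takes the
    strongest Gaussian response of any projected point."""
    xs = torch.linspace(-1.0, 1.0, res, device=points.device)
    gy, gx = torch.meshgrid(xs, xs, indexing="ij")
    pix = torch.stack([gx, gy], dim=-1).reshape(-1, 2)            # (res*res, 2)
    d2 = ((pix[:, None, :] - points[None, :, :2]) ** 2).sum(-1)   # (res*res, N)
    return torch.exp(-d2 / (2 * sigma**2)).max(dim=1).values.reshape(res, res)

def self_projection_loss(pred_points, target_mask):
    """Mean squared error between the rendered silhouette of the predicted
    shape and the silhouette extracted from the input image."""
    proj = soft_silhouette(pred_points, res=target_mask.shape[-1])
    return ((proj - target_mask) ** 2).mean()

Because the Gaussian splatting is smooth, gradients flow from the silhouette error back to the 3D point positions, which is what lets the part generators train without paired 2D-3D supervision.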