Browsing by Author "Wang, He"
Now showing 1 - 3 of 3
Item: DARC: A Visual Analytics System for Multivariate Applicant Data Aggregation, Reasoning and Comparison (The Eurographics Association, 2022)
Authors: Hou, Yihan; Liu, Yu; Wang, He; Zhang, Zhichao; Li, Yue; Liang, Hai-Ning; Yu, Lingyun
Editors: Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
Abstract: People often make decisions based on a comprehensive understanding of various materials, judgement of reasons, and comparison among choices. For instance, when hiring committees review multivariate applicant data, they need to consider and compare different aspects of the applicants' materials. However, the amount and complexity of multivariate data make it difficult to analyze the data, extract the most salient information, and then rapidly form opinions based on that information. A fast and comprehensive understanding of multivariate data sets is therefore a pressing need in many fields, such as business and education. In this work, we conducted in-depth interviews with stakeholders and characterized the user requirements involved in data-driven decision making when reviewing school applications. Based on these requirements, we propose DARC, a visual analytics system that facilitates decision making on multivariate applicant data. The system supports users in gaining insights into the multivariate data, forming an overview of all data cases, and retrieving the original data quickly and intuitively. The effectiveness of DARC is validated through observational user evaluations and interviews.

Item: Learning a Generative Model for Multi-Step Human-Object Interactions from Videos (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Wang, He; Pirk, Sören; Yumer, Ersin; Kim, Vladimir; Sener, Ozan; Sridhar, Srinath; Guibas, Leonidas
Editors: Alliez, Pierre and Pellacini, Fabio
Abstract: Creating dynamic virtual environments consisting of humans interacting with objects is a fundamental problem in computer graphics. While it is well accepted that agent interactions play an essential role in synthesizing such scenes, most existing techniques focus exclusively on static scenes, leaving out the dynamic component. In this paper, we present a generative model for synthesizing plausible multi-step dynamic human-object interactions. Generating multi-step interactions is challenging because the space of such interactions is exponential in the number of objects, activities, and time steps. We handle this combinatorial complexity by learning a lower-dimensional space of plausible human-object interactions. We use action plots to represent an interaction as a sequence of discrete actions together with the participating objects and their states. To build action plots, we present an automatic method that applies state-of-the-art computer vision techniques to RGB videos to detect individual objects and their states, extract the involved hands, and recognize the actions performed. Action plots built from videos of everyday activities are used to train a generative model based on a Recurrent Neural Network (RNN). The network learns the causal dependencies and constraints between individual actions and can generate novel and diverse multi-step human-object interactions. Our representation and generative model enable new capabilities in a variety of applications, such as interaction prediction, animation synthesis, and motion planning for a real robotic agent.

Item: Model-based Crowd Behaviours in Human-solution Space (© 2023 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023)
Authors: Xiang, Wei; Wang, He; Zhang, Yuqing; Yip, Milo K.; Jin, Xiaogang
Editors: Hauser, Helwig and Alliez, Pierre
Abstract: Realistic crowd simulation has been pursued for decades, but it still requires tedious human labour and much trial and error. Most current crowd modelling is either empirical (model-based) or data-driven (model-free). Model-based methods cannot fit observed data precisely, whereas model-free methods are limited by the availability and quality of data and are uninterpretable. In this paper, we aim to combine the advantages of both model-based and data-driven approaches. To this end, we propose a new simulation framework built on a physics-based model designed to be data-friendly: both the general prior knowledge about crowds encoded by the physics-based model and the specific real-world crowd data at hand jointly influence the system dynamics. With a multi-granularity physics-based model, the framework combines microscopic and macroscopic motion control. Each simulation step is formulated as an energy optimization problem whose minimizer is the desired crowd behaviour. In contrast to traditional optimization-based methods, which seek the theoretical minimizer, we design an acceleration-aware data-driven scheme that computes the minimizer from real-world data, parameterizing both velocity and acceleration to achieve higher realism. Experiments demonstrate that our method produces crowd animations that behave more realistically than earlier methods across a variety of scales and scenarios.
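The action-plot idea in "Learning a Generative Model for Multi-Step Human-Object Interactions from Videos" — modelling an interaction as a sequence of discrete action tokens with a recurrent network — can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the token vocabulary, the tiny untrained tanh-RNN, and the sizes are stand-ins, not the paper's learned model or data.

```python
import numpy as np

# Illustrative action-plot vocabulary (the paper learns its actions and
# object states from annotated videos; these tokens are invented here).
VOCAB = ["<start>", "grasp(cup)", "pour(kettle,cup)", "stir(spoon,cup)",
         "drink(cup)", "<end>"]
V, H = len(VOCAB), 16  # vocabulary size, hidden-state size

rng = np.random.default_rng(0)

class TinyActionRNN:
    """Minimal tanh-RNN language model over discrete action tokens."""
    def __init__(self):
        self.Wxh = rng.normal(0, 0.1, (H, V))  # input-to-hidden weights
        self.Whh = rng.normal(0, 0.1, (H, H))  # hidden-to-hidden weights
        self.Why = rng.normal(0, 0.1, (V, H))  # hidden-to-output weights

    def step(self, token_id, h):
        x = np.zeros(V)
        x[token_id] = 1.0                      # one-hot action token
        h = np.tanh(self.Wxh @ x + self.Whh @ h)
        logits = self.Why @ h
        p = np.exp(logits - logits.max())      # softmax over next actions
        p /= p.sum()
        return p, h

    def sample_plot(self, max_steps=8):
        """Generate one action plot token-by-token until <end>."""
        h, token = np.zeros(H), VOCAB.index("<start>")
        plot = []
        for _ in range(max_steps):
            p, h = self.step(token, h)
            token = int(rng.choice(V, p=p))
            if VOCAB[token] == "<end>":
                break
            plot.append(VOCAB[token])
        return plot

plot = TinyActionRNN().sample_plot()
print(plot)
```

With random weights the sampled plot is noise; after training on many observed action plots, the recurrent state would carry the causal dependencies between actions that make generated sequences plausible.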
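The per-step energy-minimization formulation in "Model-based Crowd Behaviours in Human-solution Space" can be sketched in a heavily simplified form: each agent picks, from a discrete candidate set, the velocity minimizing a goal-attraction term plus pairwise collision-avoidance terms. The specific energy terms, weights, and the paper's acceleration-aware data-driven minimizer are richer than this; the terms and parameters below are illustrative assumptions only.

```python
import math

def step_crowd(agents, goals, dt=0.1, speed=1.0, w_goal=1.0, w_avoid=2.0):
    """One simulation step: each agent greedily minimizes a simple energy
    over a discrete set of candidate velocities (an illustrative stand-in
    for the paper's per-step optimization)."""
    candidates = [(speed * math.cos(a), speed * math.sin(a))
                  for a in (2 * math.pi * k / 16 for k in range(16))]
    candidates.append((0.0, 0.0))              # standing still is allowed
    new_positions = []
    for i, (x, y) in enumerate(agents):
        gx, gy = goals[i]
        d = math.hypot(gx - x, gy - y) or 1.0
        v_pref = (speed * (gx - x) / d, speed * (gy - y) / d)  # toward goal

        def energy(v):
            # Goal term: deviation from the preferred velocity.
            e = w_goal * ((v[0] - v_pref[0])**2 + (v[1] - v_pref[1])**2)
            px, py = x + v[0] * dt, y + v[1] * dt  # predicted position
            # Avoidance term: penalize being close to other agents.
            for j, (ox, oy) in enumerate(agents):
                if j != i:
                    e += w_avoid / max(math.hypot(px - ox, py - oy), 1e-3)
            return e

        vx, vy = min(candidates, key=energy)   # minimizer = chosen behaviour
        new_positions.append((x + vx * dt, y + vy * dt))
    return new_positions

# Two agents swapping positions must trade off progress against proximity.
agents = [(0.0, 0.0), (2.0, 0.0)]
goals = [(2.0, 0.0), (0.0, 0.0)]
for _ in range(5):
    agents = step_crowd(agents, goals)
```

The paper replaces this greedy theoretical minimizer with one computed from real-world data, parameterized over both velocity and acceleration, which is what yields the reported gain in realism.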