Browsing by Author "Liu, C. Karen"
Now showing 1 - 3 of 3
Item
Hierarchical Planning and Control for Box Loco-Manipulation (ACM Association for Computing Machinery, 2023)
Xie, Zhaoming; Tseng, Jonathan; Starke, Sebastian; Panne, Michiel van de; Liu, C. Karen; Wang, Huamin; Ye, Yuting; Zordan, Victor
Humans perform everyday tasks using a combination of locomotion and manipulation skills. Building a system that can handle both is essential to creating virtual humans. We present a physically simulated human capable of solving box rearrangement tasks, which require a combination of both skills. We propose a hierarchical control architecture in which each level solves the task at a different level of abstraction; the result is a physics-based simulated virtual human capable of rearranging boxes in a cluttered environment. The control architecture integrates a planner, diffusion models, and physics-based motion imitation of sparse motion clips using deep reinforcement learning. Boxes can vary in size, weight, shape, and placement height. Code and trained control policies are provided.

Item
Learning Human Search Behavior from Egocentric Visual Inputs (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Sorokin, Maks; Yu, Wenhao; Ha, Sehoon; Liu, C. Karen; Mitra, Niloy; Viola, Ivan
"Looking for things" is a mundane but critical task we repeatedly carry out in our daily lives. We introduce a method to develop a human character capable of searching for a randomly located target object in a detailed 3D scene using its locomotion capability and egocentric vision, represented as RGBD images. Deprived of privileged 3D information, the character is forced to move and look around simultaneously to compensate for its restricted sensing, resulting in natural navigation and search behaviors.
Our method consists of two components: 1) a search control policy based on an abstract character model, and 2) an online replanning control module that synthesizes detailed kinematic motion from the trajectories planned by the search policy. We demonstrate that the combined techniques enable the character to effectively find often-occluded household items in indoor environments. The same search policy can be applied to different full-body characters without retraining. We evaluate our method quantitatively on randomly generated scenarios. Our work is a first step toward creating intelligent virtual agents with humanlike behaviors driven by onboard sensors, paving the way toward future robotic applications.

Item
A Survey on Reinforcement Learning Methods in Character Animation (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Kwiatkowski, Ariel; Alvarado, Eduardo; Kalogeiton, Vicky; Liu, C. Karen; Pettré, Julien; Panne, Michiel van de; Cani, Marie-Paule; Meneveaux, Daniel; Patanè, Giuseppe
Reinforcement Learning is an area of Machine Learning focused on how agents can be trained to make sequential decisions and achieve a particular goal within an arbitrary environment. While learning, they repeatedly take actions based on their observations of the environment and receive rewards that define the objective. This experience is then used to progressively improve the policy controlling the agent's behavior, typically represented by a neural network. The trained module can then be reused for similar problems, which makes this approach promising for animating autonomous yet reactive characters in simulators, video games, or virtual reality environments. This paper surveys modern Deep Reinforcement Learning methods and discusses their possible applications in character animation, from skeletal control of a single, physically based character to navigation controllers for individual agents and virtual crowds.
It also describes the practical side of training DRL systems, comparing the different frameworks available to build such agents.
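The observe-act-reward-update loop the survey abstract describes can be sketched in a few lines. The following is a minimal illustration only, not code from any of the listed papers: it uses tabular Q-learning on a hypothetical 1-D corridor task (all names and parameters here are assumptions for the sketch), whereas the surveyed DRL methods replace the table with a neural network.

```python
import random

N_STATES = 5          # corridor cells 0..4; the goal is cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    # value estimates for every (state, action) pair, initially zero
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # act: epsilon-greedy choice based on the current observation
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            # reward defines the objective: 1 on reaching the goal, else 0
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # update: temporal-difference step that improves the policy
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# the learned greedy policy in each non-goal state
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
```

After training, the greedy action in every non-goal cell is "move right," i.e. toward the goal. The frameworks the survey compares (for training DRL agents) automate exactly this loop at scale, with simulated environments in place of the toy corridor.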