Learning Human Search Behavior from Egocentric Visual Inputs
dc.contributor.author | Sorokin, Maks | en_US |
dc.contributor.author | Yu, Wenhao | en_US |
dc.contributor.author | Ha, Sehoon | en_US |
dc.contributor.author | Liu, C. Karen | en_US |
dc.contributor.editor | Mitra, Niloy and Viola, Ivan | en_US |
dc.date.accessioned | 2021-04-09T08:01:02Z | |
dc.date.available | 2021-04-09T08:01:02Z | |
dc.date.issued | 2021 | |
dc.description.abstract | "Looking for things" is a mundane but critical task we repeatedly carry out in our daily lives. We introduce a method for developing a human character capable of searching for a randomly located target object in a detailed 3D scene using its locomotion capability and egocentric visual perception represented as RGBD images. Deprived of privileged 3D information, the character is forced to move and look around simultaneously to compensate for its restricted sensing capability, resulting in natural navigation and search behaviors. Our method consists of two components: 1) a search control policy based on an abstract character model, and 2) an online replanning control module that synthesizes detailed kinematic motion from the trajectories planned by the search policy. We demonstrate that the combined techniques enable the character to effectively find often-occluded household items in indoor environments. The same search policy can be applied to different full-body characters without retraining. We evaluate our method quantitatively on randomly generated scenarios. Our work is a first step toward creating intelligent virtual agents with human-like behaviors driven by onboard sensors, paving the way toward future robotic applications. | en_US |
dc.description.number | 2 | |
dc.description.sectionheaders | Learning from Human Motion | |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.volume | 40 | |
dc.identifier.doi | 10.1111/cgf.142641 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.pages | 389-398 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.142641 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf142641 | |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | Computing methodologies | |
dc.subject | Procedural animation | |
dc.subject | Motion processing | |
dc.title | Learning Human Search Behavior from Egocentric Visual Inputs | en_US |
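The two-component pipeline described in the abstract above (a search policy acting on an abstract character model from egocentric RGBD input, plus an online replanning module that turns the planned trajectory into full-body kinematic motion) can be summarized by the minimal Python sketch below. All names (SearchPolicy, MotionReplanner, search_episode, the env interface) are hypothetical placeholders for illustration, not the authors' implementation.

import numpy as np

class SearchPolicy:
    """Maps an egocentric RGBD observation to a short-horizon command
    for an abstract character model (hypothetical interface)."""
    def act(self, rgbd: np.ndarray, proprioception: np.ndarray) -> np.ndarray:
        # In the paper this is a learned policy; here we return a
        # placeholder command [forward_speed, turn_rate, head_yaw].
        return np.zeros(3)

class MotionReplanner:
    """Online replanning module: synthesizes detailed full-body kinematic
    motion that follows the trajectory from the search policy."""
    def step(self, command: np.ndarray, current_pose: np.ndarray) -> np.ndarray:
        # Placeholder: a real module would drive a kinematic controller or
        # motion synthesis model; here we simply return the current pose.
        return current_pose

def search_episode(env, policy, replanner, max_steps=1000):
    """Run one search episode until the target object is found or time runs out.
    The env object (reset/step interface) is assumed, not from the paper."""
    rgbd, proprio, pose = env.reset()
    for _ in range(max_steps):
        command = policy.act(rgbd, proprio)    # abstract-model search control
        pose = replanner.step(command, pose)   # full-body kinematic motion
        rgbd, proprio, found = env.step(pose)  # egocentric sensing only
        if found:
            return True
    return False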