FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms
dc.contributor.author | Fukuoka, Masaaki | en_US |
dc.contributor.author | Verhulst, Adrien | en_US |
dc.contributor.author | Nakamura, Fumihiko | en_US |
dc.contributor.author | Takizawa, Ryo | en_US |
dc.contributor.author | Masai, Katsutoshi | en_US |
dc.contributor.author | Sugimoto, Maki | en_US |
dc.contributor.editor | Kakehi, Yasuaki and Hiyama, Atsushi | en_US |
dc.date.accessioned | 2019-09-11T05:43:08Z | |
dc.date.available | 2019-09-11T05:43:08Z | |
dc.date.issued | 2019 | |
dc.description.abstract | Supernumerary Robotic Limbs (SRLs) can make physical activities easier, but require cooperation with the operator. To improve cooperation between the SRLs and the operator, the SRLs can try to predict the operator's intentions. One way to predict the operator's intentions is to use his/her Facial Expressions (FEs). Here we investigate the mapping between FEs and Supernumerary Robotic Arms (SRAs) commands (e.g. grab, release). To measure FEs, we used an optical sensor-based approach (here inside an HMD). The sensor data are fed to an SVM able to predict FEs. The SRAs can then carry out commands by predicting the operator's FEs (and arguably, the operator's intention). We ran a data collection study (N=10) to determine which FEs to assign to which robotic arm commands in a Virtual Environment (VE). We investigated the mapping patterns by (1) performing an object reaching - grasping - releasing task using ''any'' FEs; (2) analyzing the sensor data and a self-reported FE questionnaire to find the most common FEs used for a given command; (3) classifying the FEs into FE groups. We then ran another study (N=14) to find the most effective combination of FE groups / SRA commands by recording task completion time. As a result, we found that the optimal combinations are: (i) Eyes + Mouth for grabbing / releasing; and (ii) Mouth for extending / contracting the arms (i.e. along the forward axis). | en_US |
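The abstract describes a pipeline in which optical sensor readings captured inside the HMD are classified by an SVM into facial expressions, which are then mapped to SRA commands. The sketch below illustrates that idea only in broad strokes, assuming a scikit-learn SVM; the FE class names, the FE-to-command table, and the function names are illustrative placeholders, not the authors' implementation.

# Hypothetical sketch (not from the paper): optical-sensor features -> SVM -> FE group -> SRA command.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Assumed FE groups; the paper reports Eyes + Mouth and Mouth as the effective groups,
# but the exact label set here is illustrative.
FE_TO_COMMAND = {
    "eyes+mouth": "grab_or_release",   # (i) Eyes + Mouth -> grabbing / releasing
    "mouth": "extend_or_contract",     # (ii) Mouth -> extending / contracting the arm
    "neutral": "idle",
}

def train_fe_classifier(X, y):
    """Fit an SVM on sensor data. X: (n_samples, n_sensors) optical readings; y: FE labels."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, y)
    return clf

def predict_command(clf, sensor_frame):
    """Classify one frame of sensor data and look up the corresponding SRA command."""
    fe = clf.predict(np.asarray(sensor_frame, dtype=float).reshape(1, -1))[0]
    return FE_TO_COMMAND.get(fe, "idle")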
dc.description.sectionheaders | Sensing and Interaction | |
dc.description.seriesinformation | ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments | |
dc.identifier.doi | 10.2312/egve.20191275 | |
dc.identifier.isbn | 978-3-03868-083-3 | |
dc.identifier.issn | 1727-530X | |
dc.identifier.pages | 17-24 | |
dc.identifier.uri | https://doi.org/10.2312/egve.20191275 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/egve20191275 | |
dc.publisher | The Eurographics Association | en_US |
dc.subject | Computer systems organization | |
dc.subject | External interfaces for robotics | |
dc.subject | Real-time operating systems |
dc.subject | Software and its engineering | |
dc.subject | Virtual worlds training simulations | |
dc.title | FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms | en_US |