ICAT-EGVE2019
Browsing ICAT-EGVE2019 by Subject "Computer systems organization"
Now showing 1 - 2 of 2
Item
FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms (The Eurographics Association, 2019)
Fukuoka, Masaaki; Verhulst, Adrien; Nakamura, Fumihiko; Takizawa, Ryo; Masai, Katsutoshi; Sugimoto, Maki; Kakehi, Yasuaki and Hiyama, Atsushi
Supernumerary Robotic Limbs (SRLs) can make physical activities easier, but they require cooperation with the operator. To improve cooperation between the SRLs and the operator, the SRLs can try to predict the operator's intentions. One way to predict the operator's intentions is to use his/her Facial Expressions (FEs). Here we investigate the mapping between FEs and Supernumerary Robotic Arm (SRA) commands (e.g. grab, release). To measure FEs, we used an optical sensor-based approach (here, with sensors mounted inside an HMD). The sensor data are fed to an SVM that predicts FEs. The SRAs can then carry out commands by predicting the operator's FEs (and, arguably, the operator's intention). We ran a data collection study (N=10) to determine which FEs to assign to which robotic arm commands in a Virtual Reality Environment (VE). We investigated the mapping patterns by (1) performing an object reaching - grasping - releasing task using "any" FEs; (2) analyzing the sensor data and a self-reported FE questionnaire to find the most common FEs used for a given command; and (3) classifying the FEs into FE groups. We then ran another study (N=14) to find the most effective combination of FE groups and SRA commands by recording task completion time. We found that the optimal combinations are: (i) Eyes + Mouth for grabbing / releasing; and (ii) Mouth for extending / contracting the arms (i.e. along the forward axis). (A minimal sketch of this classification pipeline follows the listing below.)

Item
Real Time Remapping of a Third Arm in Virtual Reality (The Eurographics Association, 2019)
Drogemuller, Adam; Verhulst, Adrien; Volmer, Benjamin; Thomas, Bruce; Inami, Masahiko; Sugimoto, Maki; Kakehi, Yasuaki and Hiyama, Atsushi
We present an initial study investigating the usability of a system that lets users use one of their own limbs (here the left arm, right arm, left leg, right leg, or head) to remap and control a virtual third arm. The remapping was done by pre-selecting a limb by gazing at it, then selecting it by voice activation (here we asked the participants to say "switch"). The system was evaluated in Virtual Reality (VR), where we recorded the performance of participants (N=12, within-group design) in two box-collection tasks. We found that participants self-reported: (i) significantly less body ownership when switching limbs than when not switching limbs; and (ii) less effort when switching limbs than when not switching limbs. In addition, we found that limb dominance did not significantly affect remapping decisions when controlling the third arm. (A minimal sketch of the gaze-then-voice selection loop follows below.)
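
The FaceDrive abstract above describes a three-stage pipeline: per-frame optical sensor readings from inside the HMD, an SVM that classifies them into FEs, and a lookup from the predicted FE to an SRA command. The sketch below is a minimal reconstruction of that pipeline, not the authors' code: the sensor channel count, the FE labels, and the FE-to-command table are illustrative assumptions (the table loosely follows the paper's best-performing Eyes + Mouth for grab/release and Mouth for extend/contract grouping).

```python
# Minimal sketch of an FE-to-SRA-command pipeline (assumed, not the authors' code):
# optical sensor frame -> SVM -> facial expression (FE) -> robotic arm command.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical FE classes and their SRA commands.
FE_TO_COMMAND = {
    "eyes_and_mouth_open": "grab",
    "eyes_and_mouth_closed": "release",
    "mouth_open": "extend",
    "mouth_closed": "contract",
}

def train_fe_classifier(sensor_samples, fe_labels):
    """Fit an SVM on per-frame optical sensor vectors (one row per frame)."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(sensor_samples, fe_labels)
    return clf

def predict_command(clf, sensor_frame):
    """Map one frame of sensor readings to an SRA command."""
    fe = clf.predict(sensor_frame.reshape(1, -1))[0]
    return FE_TO_COMMAND.get(fe, "idle")

# Toy usage with random stand-in data; real data would come from the HMD sensors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                 # 16 hypothetical sensor channels
y = rng.choice(list(FE_TO_COMMAND), size=200)  # stand-in FE labels
clf = train_fe_classifier(X, y)
print(predict_command(clf, rng.normal(size=16)))
```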
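
The third-arm study's remapping loop amounts to a small state machine: gaze pre-selects a limb, and the voice command "switch" commits it as the controller of the virtual third arm. The sketch below is an assumed reconstruction, not the authors' implementation; the callback names and limb identifiers are hypothetical.

```python
# Minimal sketch of the gaze-then-voice limb remapping loop (assumptions throughout).
from dataclasses import dataclass
from typing import Optional

LIMBS = {"left_arm", "right_arm", "left_leg", "right_leg", "head"}

@dataclass
class ThirdArmRemapper:
    gazed_limb: Optional[str] = None  # limb currently under gaze (pre-selected)
    mapped_limb: str = "right_arm"    # limb currently driving the third arm

    def on_gaze(self, limb: Optional[str]) -> None:
        """Eye-tracker callback; limb is None when gaze rests elsewhere."""
        self.gazed_limb = limb if limb in LIMBS else None

    def on_voice(self, utterance: str) -> None:
        """Speech-recognizer callback; saying 'switch' commits the remap."""
        if utterance.strip().lower() == "switch" and self.gazed_limb:
            self.mapped_limb = self.gazed_limb

# Example: the user looks at their left leg, then says "switch".
remapper = ThirdArmRemapper()
remapper.on_gaze("left_leg")
remapper.on_voice("switch")
print(remapper.mapped_limb)  # -> left_leg
```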