A Mutual Motion Capture System for Face-to-face Collaboration
dc.contributor.author | Nakamura, Atsuyuki | en_US |
dc.contributor.author | Kiyokawa, Kiyoshi | en_US |
dc.contributor.author | Ratsamee, Photchara | en_US |
dc.contributor.author | Mashita, Tomohiro | en_US |
dc.contributor.author | Uranishi, Yuki | en_US |
dc.contributor.author | Takemura, Haruo | en_US |
dc.contributor.editor | Robert W. Lindeman and Gerd Bruder and Daisuke Iwai | en_US |
dc.date.accessioned | 2017-11-21T15:42:36Z | |
dc.date.available | 2017-11-21T15:42:36Z | |
dc.date.issued | 2017 | |
dc.description.abstract | In recent years, motion capture technology for measuring body movement has been used in many fields. Moreover, motion capture targeting multiple people is becoming necessary in multi-user virtual reality (VR) and augmented reality (AR) environments. It is desirable that motion capture require no wearable devices, so that natural motion can be captured easily. Some systems avoid wearable devices by using an RGB-D camera fixed in the environment, but the user then has to stay in front of the fixed camera. Therefore, this research proposes a motion capture technique for multi-user VR/AR environments using head-mounted displays (HMDs) that neither limits the user's working range nor requires any wearable devices. In the proposed technique, an RGB-D camera is attached to each HMD and motion capture is carried out mutually. Motion capture accuracy is improved by correcting the depth image. A prototype system has been implemented to evaluate the effectiveness of the proposed method, and motion capture accuracy has been compared under two conditions, with and without depth image correction, while rotating the RGB-D camera. As a result, it was confirmed that the proposed method could decrease the number of frames with erroneous motion capture by 49% to 100% compared with the case without depth image correction. | en_US |
dc.description.sectionheaders | Tracking | |
dc.description.seriesinformation | ICAT-EGVE 2017 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments | |
dc.identifier.doi | 10.2312/egve.20171332 | |
dc.identifier.isbn | 978-3-03868-038-3 | |
dc.identifier.issn | 1727-530X | |
dc.identifier.pages | 9-16 | |
dc.identifier.uri | https://doi.org/10.2312/egve.20171332 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/egve20171332 | |
dc.publisher | The Eurographics Association | en_US |
dc.subject | Human-centered computing | |
dc.subject | Mixed/augmented reality | |
dc.subject | Virtual reality | |
dc.subject | Collaborative interaction | |
dc.title | A Mutual Motion Capture System for Face-to-face Collaboration | en_US |
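The abstract describes correcting the depth image while the head-mounted RGB-D camera rotates, but the record gives no implementation detail. The following is a minimal sketch, assuming the correction amounts to reprojecting each depth frame into a gravity-aligned (level) virtual camera using the HMD's orientation, so that a body tracker expecting a level camera can process it. All function names and parameters are illustrative and are not taken from the paper.

```python
# Sketch of depth-image correction for a head-mounted RGB-D camera (assumed
# approach, not the paper's documented method): back-project the depth frame,
# undo the head rotation, and re-render from a level virtual camera.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-space 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def level_depth_image(depth, rotation, fx, fy, cx, cy):
    """Rotate the point cloud by the leveling rotation (from the HMD pose)
    and re-render it as a depth image seen from a level virtual camera."""
    pts = depth_to_points(depth, fx, fy, cx, cy)
    pts = pts[pts[:, 2] > 0] @ rotation.T   # undo the head tilt
    h, w = depth.shape
    out = np.zeros_like(depth)
    z = pts[:, 2]
    front = z > 0
    u = np.round(pts[front, 0] * fx / z[front] + cx).astype(int)
    v = np.round(pts[front, 1] * fy / z[front] + cy).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Keep the nearest point per pixel (simple z-buffer): write far-to-near.
    order = np.argsort(-z[front][inside])
    out[v[inside][order], u[inside][order]] = z[front][inside][order]
    return out
```

A body tracker would then run on the frame returned by level_depth_image rather than on the raw depth image, which is one plausible reading of the "depth image correction" condition compared in the evaluation.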