Selecting Moving Targets in AR using Head Orientation
dc.contributor.author | Matsumoto, Keigo | en_US |
dc.contributor.author | Muta, Masahumi | en_US |
dc.contributor.author | Cheng, Kelvin | en_US |
dc.contributor.author | Masuko, Soh | en_US |
dc.contributor.editor | Tony Huang and Arindam Dey | en_US |
dc.date.accessioned | 2017-11-21T15:42:14Z | |
dc.date.available | 2017-11-21T15:42:14Z | |
dc.date.issued | 2017 | |
dc.description.abstract | With the spread of augmented reality (AR) using head-mounted displays or smart glasses, attempts have been made to present information by superimposing it on people and things. In general, people are constantly moving and rarely stay stationary, so it is natural for the superimposed AR information to move with them. However, moving targets are often difficult to follow and select. We propose two novel techniques, TagToPlace and TagAlong, which help users select moving targets using head orientation. We conducted a user study comparing our proposed techniques to a conventional gaze-selection method, DwellTime. The results showed that our proposed techniques outperform the conventional one in terms of throughput when selecting moving targets. | en_US |
dc.description.sectionheaders | Posters B | |
dc.description.seriesinformation | ICAT-EGVE 2017 - Posters and Demos | |
dc.identifier.doi | 10.2312/egve.20171374 | |
dc.identifier.isbn | 978-3-03868-052-9 | |
dc.identifier.pages | 21-22 | |
dc.identifier.uri | https://doi.org/10.2312/egve.20171374 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/egve20171374 | |
dc.publisher | The Eurographics Association | en_US |
dc.subject | H.5.1 [INFORMATION INTERFACES AND PRESENTATION (e.g. HCI)] | |
dc.subject | Multimedia Information Systems | |
dc.subject | Artificial, augmented, and virtual realities | |
dc.title | Selecting Moving Targets in AR using Head Orientation | en_US |