ICAT-EGVE2018
Browsing ICAT-EGVE2018 by Subject "Computing methodologies"
Item: Adaptive Filtering of Physical-Virtual Artifacts for Synthetic Animatronics (The Eurographics Association, 2018)
Authors: Schubert, Ryan; Bruder, Gerd; Welch, Gregory
Editors: Bruder, Gerd; Yoshimoto, Shunsuke; Cobb, Sue
Spatial Augmented Reality (SAR), e.g., based on monoscopic projected imagery on physical three-dimensional (3D) surfaces, can be particularly well suited for ad hoc group or multi-user augmented reality experiences, since it does not encumber users with head-worn or carried devices. However, conveying a notion of realistic 3D shapes and movements on SAR surfaces using monoscopic imagery is a difficult challenge. While previous work focused on physical actuation of such surfaces to achieve geometrically dynamic content, we introduce a different concept, which we call "Synthetic Animatronics", i.e., conveying geometric movement or deformation purely through manipulation of the imagery shown on a static display surface. We present a model for the distribution of the viewpoint-dependent distortion that occurs when there are discrepancies between the physical display surface and the virtual object being represented, and describe a real-time implementation of a method for adaptively filtering the imagery based on an approximation of the expected potential error. Finally, we describe an existing physical SAR setup well suited for synthetic animatronics and a corresponding Unity-based SAR simulator that allows flexible exploration and validation of the technique and its parameters.

Item: Blowing in the Wind: Increasing Copresence with a Virtual Human via Airflow Influence in Augmented Reality (The Eurographics Association, 2018)
Authors: Kim, Kangsoo; Bruder, Gerd; Welch, Gregory
In a social context where two or more interlocutors interact with each other in the same space, one's sense of copresence with the others is an important factor for the quality of communication and engagement in the interaction.
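The adaptive filtering described in the synthetic animatronics item above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes a normalized per-pixel map of expected viewpoint-dependent error (all names are hypothetical) and blurs each pixel in proportion to that error by blending precomputed box blurs.

```python
import numpy as np

def box_blur(img, k):
    """Mean filter with an odd kernel size k (k=1 returns the image unchanged)."""
    if k <= 1:
        return img.copy()
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            out += padded[pad + dy : pad + dy + img.shape[0],
                          pad + dx : pad + dx + img.shape[1]]
    return out / (k * k)

def adaptive_filter(img, error_map, kernel_sizes=(1, 3, 5, 7)):
    """Pixels with higher expected error get a stronger blur."""
    levels = [box_blur(img, k) for k in kernel_sizes]
    # Map normalized error [0, 1] to a fractional blur level and interpolate.
    idx = np.clip(error_map, 0, 1) * (len(levels) - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, len(levels) - 1)
    t = idx - lo
    stack = np.stack(levels)
    rows, cols = np.indices(img.shape)
    return (1 - t) * stack[lo, rows, cols] + t * stack[hi, rows, cols]
```

Zero expected error leaves the imagery untouched; maximal error applies the largest kernel, softening exactly the regions where the surface/geometry discrepancy would be most visible.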
Although augmented reality (AR) technology enables the superposition of virtual humans (VHs) as interlocutors in the real world, the resulting sense of copresence is usually far lower than with a real human interlocutor. In this paper, we describe a human-subject study in which we investigated the effects on copresence of subtle multi-modal interaction between the virtual environment and the real world, where a VH and human participants were co-located. We compared two levels of gradually increased multi-modal interaction: (i) virtual objects being affected by real airflow, as commonly experienced with fans in summer, and (ii) a VH showing awareness of this airflow. We chose airflow as one example of an environmental factor that can noticeably affect both the real and virtual worlds, and also cause subtle responses in interlocutors. We hypothesized that our two levels of treatment would gradually increase the sense of being together with the VH, i.e., participants would report higher copresence with airflow influence than without it, and copresence would be even higher when the VH showed awareness of the airflow. Statistical analysis of the participant-reported copresence scores showed an improvement in perceived copresence with the VH when the physical-virtual interactivity via airflow and the VH's awareness behaviors were present together.
As the considered environmental factors are directed at the VH, i.e., they are not part of the direct interaction with the real human, they can provide a reasonably generalizable approach to supporting copresence in AR beyond the particular use case in the present experiment.

Item: BuzzwireVR: An Immersive Game to Supplement Fine-Motor Movement Therapy (The Eurographics Association, 2018)
Authors: Christou, Chris G.; Michael-Grigoriou, Despina; Sokratous, D.; Tsiakoulia, M.
Recovery of upper-body fine-motor skills after brain trauma, e.g. after a stroke, involves a long process of movement rehabilitation. When the arms and hands are affected, patients often spend many hours exercising in order to regain control of their movements, often using children's toys. This paper describes the development of a Virtual Reality (VR) system designed to supplement rehabilitation by encouraging hand movements while playing a fun game. The system is based on the well-known Buzzwire children's toy, which requires steady hand-eye coordination to pass a ring along a wire without touching it. The toy has been used in a variety of research studies in the past, and we considered it ideal for motor rehabilitation because it requires steady hand and finger movements. In our virtualised version of the toy, the wire consists of a parametric spline curve with a cylindrical cross-section positioned in front of the player. Cylinders at the ends of the 'wire' change colour to indicate which hand to use. The parametric nature of the wire allows us to record performance variables that are not readily available in the physical version. We report on two initial experiments that tested and evaluated various aspects of performance with able-bodied participants and stroke patients, followed by a description of how we developed the toy into a multi-level game that encourages increasingly intricate hand movements.
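The distance-from-the-wire performance metric made possible by the parametric wire in BuzzwireVR might look like the following sketch (hypothetical names, not the authors' code; the spline is approximated by sampling it into a polyline).

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from point p to the line segment a-b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def distance_to_wire(p, wire_points):
    """Minimum distance from the ring centre p to the sampled wire."""
    p = np.asarray(p, dtype=float)
    return min(point_segment_distance(p, a, b)
               for a, b in zip(wire_points[:-1], wire_points[1:]))

def mean_wire_distance(trajectory, wire_points):
    """Average deviation over a recorded ring trajectory -- one accuracy metric."""
    return float(np.mean([distance_to_wire(p, wire_points) for p in trajectory]))
```

Logging this deviation (together with speed) per frame is what lets a virtual version distinguish, say, dominant from non-dominant hand performance, which the physical toy cannot record.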
In the first evaluation, we tested whether performance variables (such as average speed and distance from the wire) could distinguish between the dominant and non-dominant hands of able-bodied participants. We also compared performance with and without binocular viewing. Results showed that our metrics could distinguish between the player's dominant and non-dominant hand. We also noted a dramatic disruption of performance when binocular stereopsis was not available. The second experiment was a usability study involving a sample of participants with post-stroke hemiparesis. Results showed positive acceptance of the technology with no fatigue or nausea. Our gamified version of the task applies lessons from the previous studies to create an enjoyable multi-level game with auditory guidance as feedback. Results are discussed in terms of the potential benefits of using such technology in addition to conventional therapy.

Item: Compression Of 16K Video For Mobile VR Playback Over 4K Streams (The Eurographics Association, 2018)
Authors: Vazquez, Iker; Cutchin, Steve
Mobile virtual reality headset devices are currently constrained to playing back 4K video streams for hardware, network, and performance reasons. This strongly limits the quality of 360-degree videos delivered over 4K streams, which in turn translates to insufficient resolution for virtual reality video playback. Spherical stereo virtual reality videos can currently be captured at 8K and 16K resolutions, with 8K being the minimal resolution for an acceptable-quality video playback experience.
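As a back-of-the-envelope illustration of the constraint described in the 16K video item above (assuming 2:1 equirectangular frames; these figures are illustrative, not from the paper): a 16K frame carries sixteen times the pixels of a 4K frame, so naive downsampling to a 4K stream would discard roughly 94% of the captured detail.

```python
# Hypothetical 2:1 equirectangular frame sizes (illustrative, not from the paper).
px_16k = 15360 * 7680   # "16K" spherical frame
px_4k = 3840 * 1920     # "4K" spherical frame
ratio = px_16k / px_4k  # how many 4K frames' worth of pixels one 16K frame holds
```

This gap is what motivates spending the limited 4K budget selectively rather than uniformly, e.g., on regions that actually change between frames.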
In this paper, we present a novel technique that uses object tracking to compress 16K spherical stereo videos captured by a still camera into a format that can be streamed over 4K channels while maintaining the 16K video resolution for typical video captures.

Item: Feasibility Study of an Augmented Reality System for People with Dementia (The Eurographics Association, 2018)
Authors: Andrade Ferreira, Luis Duarte; Cavaco, Sofia; Bermúdez i Badia, Sergi
While augmented reality (AR) can be valuable in therapy with people with dementia (PwD), when designing an AR system for PwD it is important to understand how PwD interact with such systems. Here we discuss an experiment that aims to study how PwD can complete a set of activities using a variety of human-computer interaction techniques in an AR environment. Our analysis addresses four research questions: (RQ1) How autonomous are PwD while using the proposed system? (RQ2) How engaging is the system? (RQ3) How proficient are PwD in doing the proposed activities using errorful and errorless approaches? (RQ4) How useful is the proposed system as perceived by therapists? Seven people diagnosed with dementia participated in the study. We also invited three health professionals to provide feedback on the overall usefulness of the AR system for stimulation purposes with PwD at initial to intermediate stages of dementia. The experiment showed that, in general, participants enjoyed doing the activities and were able to complete them independently. As for the therapists, they showed interest in using the system for stimulation purposes in future interventions.
However, the experiment also revealed that it is important to adapt the activities to the patient's profile.

Item: HTC Vive Pro Time Performance Benchmark for Scientific Research (The Eurographics Association, 2018)
Authors: Chénéchal, Morgan Le; Goldman, Jonas Chatel
Widespread availability of consumer-level virtual reality (VR) devices creates a venue for their massive use in psychology and neuroscience research. The application of VR to scientific research, however, poses significant constraints on system performance and stability. In particular, studies with multimodal measurement of human behavior and physiology require precise hardware-software synchronization with event labeling accurate to within 10 milliseconds. Previous work investigating the suitability of VR systems for research has mainly focused on benchmarking spatial tracking performance. It therefore remains unclear whether timing parameters such as latency or jitter in VR motion capture and VR audiovisual stimulation allow science to be carried out under strong time constraints. Here we present the first quantitative test of time performance in VR input and VR feedback of the current state-of-the-art HTC Vive Pro system. Using both a low-level Python-based API and a high-level game engine (Unity), our multilevel testing procedure allows us to isolate the influence of software on the observed results. We report that, in both test conditions, latencies are non-negligible for fine synchronization with multimodal measurements; however, jitter is stable and low, which allows the effect of latency to be counter-balanced by using constant offsets to re-synchronize the multimodal data.
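The constant-offset re-synchronization suggested by the benchmark above can be sketched as follows (hypothetical names, a generic illustration rather than the authors' tooling): estimate the mean latency and jitter from paired event timestamps, then, if jitter is low, shift the VR timeline by the mean.

```python
import statistics

def latency_stats(vr_times, ref_times):
    """Mean latency (constant offset) and jitter (stdev) from paired event times."""
    lat = [v - r for v, r in zip(vr_times, ref_times)]
    return statistics.mean(lat), statistics.stdev(lat)

def resynchronize(vr_times, offset):
    """Subtract the constant offset -- valid only when jitter is low and stable."""
    return [t - offset for t in vr_times]

# Illustrative data: ~20 ms constant latency with sub-millisecond jitter.
ref = [0.0, 1.0, 2.0, 3.0]
vr = [0.020, 1.021, 2.019, 3.020]
offset, jitter = latency_stats(vr, ref)
corrected = resynchronize(vr, offset)
```

The jitter estimate is the important diagnostic: a low, stable value is what licenses treating latency as a constant rather than a per-event unknown.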
Finally, we plan to share our testing hardware setup as an open-source, low-cost benchmark toolkit, allowing objective testing to be easily reproduced by the community in an open collaborative framework.

Item: Individualized Calibration of Rotation Gain Thresholds for Redirected Walking (The Eurographics Association, 2018)
Authors: Hutton, Courtney; Ziccardi, Shelby; Medina, Julio; Rosenberg, Evan Suma
Redirected walking allows the exploration of large virtual environments within a limited physical space. To achieve this, redirected walking algorithms must maximize the rotation gains applied while remaining imperceptible to the user. Previous research has established population averages for redirection thresholds, including rotation gains. However, these averages do not account for individual variation in tolerance of, and susceptibility to, redirection. This paper investigates methodologies designed to quickly and accurately calculate rotation gain thresholds for an individual user. The new method is straightforward to implement, requires a minimal amount of space, and takes only a few minutes to estimate a user's personal threshold for rotation gains. Results from a user study confirm the wide variability in detection thresholds and indicate that the method of parameter estimation through sequential testing (PEST) is viable for efficiently calibrating individual thresholds.

Item: A Novel Approach for Cooperative Motion Capture (COMOCAP) (The Eurographics Association, 2018)
Authors: Welch, Gregory; Wang, Tianren; Bishop, Gary; Bruder, Gerd
Conventional motion capture (MOCAP) systems, e.g., optical systems, typically perform well for one person, but less so for multiple people in close proximity. Measurement quality can decline with distance, and even drop out as source/sensor components are occluded by nearby people.
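A PEST-style adaptive staircase, as named in the redirected-walking calibration item above, might be sketched as follows. This is a generic illustration with made-up parameters, not the authors' exact procedure: the step size is halved at each reversal of direction, homing in on the gain the user can just detect.

```python
def pest_threshold(detects, start=1.2, step=0.1, min_step=0.0125, trials=30):
    """Estimate a detection threshold; detects(gain) is the user's response."""
    level, last_dir = start, None
    for _ in range(trials):
        direction = -1 if detects(level) else 1    # detected -> try a subtler gain
        if last_dir is not None and direction != last_dir:
            step = max(step / 2, min_step)          # halve the step at each reversal
        level += direction * step
        last_dir = direction
    return level

# Simulated user whose true rotation-gain detection threshold is 1.3:
estimate = pest_threshold(lambda gain: gain > 1.3)
```

Because the step shrinks geometrically, a handful of trials suffices, which is consistent with the paper's claim of a few-minute, small-space calibration.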
Furthermore, conventional optical MOCAP systems estimate body posture using a global estimation approach employing cameras that are fixed in the environment, typically at a distance such that one person or object can easily occlude another, and the relative error between tracked objects in the scene can increase as they move farther from the cameras and/or closer to each other. Body-relative tracking approaches use body-worn sensors and/or sources to track limbs with respect to the head or torso, taking advantage of the proximity of the limbs to the body. We present a novel approach to MOCAP that combines and extends conventional global and body-relative approaches by distributing both sensing and active signaling over each person's body, to facilitate body-relative (intra-user) MOCAP for one person and body-body (inter-user) MOCAP for multiple people, in an approach we call cooperative motion capture (COMOCAP). We support the validity of the approach with simulation results from a system composed of acoustic transceivers (receiver-transmitter units) that provide inter-transceiver range measurements. Optical, magnetic, and other types of transceivers could also be used. Our simulations demonstrate that this approach effectively improves accuracy and robustness to occlusions when multiple people are in close proximity.

Item: Soft Finger-tip Sensing Probe Based on Haptic Primary Colors (The Eurographics Association, 2018)
Authors: Kato, Fumihiro; Inoue, Yasuyuki; Tachi, Susumu
This paper describes a novel tactile sensing probe based on haptic primary colors (HPCs) and a tactile classification system. We developed a finger-type soft tactile probe incorporating a sensor that measures three physical quantities: force, vibration, and temperature. We also constructed a system that slides the tactile probe repeatedly over the surface of a material.
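Inter-transceiver range measurements like those in the COMOCAP item above typically feed a position estimate. A generic least-squares sketch (Gauss-Newton iteration; hypothetical names, not the authors' estimator) is:

```python
import numpy as np

def locate(anchors, ranges, guess, iters=50):
    """Find the point whose distances to the anchors best match the ranges."""
    p = np.asarray(guess, dtype=float)
    anchors = np.asarray(anchors, dtype=float)
    for _ in range(iters):
        diffs = p - anchors                      # (n, 3)
        dists = np.linalg.norm(diffs, axis=1)    # predicted ranges
        residuals = dists - ranges
        J = diffs / dists[:, None]               # Jacobian rows: unit vectors
        step, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        p -= step
        if np.linalg.norm(step) < 1e-12:
            break
    return p

# Four transceivers at known positions; ranges to an unknown marker position:
anchors = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
true = np.array([0.3, 0.2, 0.6])
ranges = np.linalg.norm(true - np.asarray(anchors, float), axis=1)
estimate = locate(anchors, ranges, guess=[0.5, 0.5, 0.5])
```

In a cooperative setting the "anchors" are themselves body-worn and moving, so the same residual structure would appear inside a joint, multi-body estimator rather than a standalone solve.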
The tactile fluctuation obtained from the tactile probe was recorded, and a frequency-analyzed image was generated. In the evaluation experiments, tactile images were generated by sliding the probe over seven materials (ray fish skin, aluminum plate, rusting hemp fabric, MDF board, tatami mat fabric, acrylic board, and rubber sheet). A convolutional neural network (CNN) was constructed and its classification performance was evaluated. In addition, we used the tactile images to assess the classification performance of TLAlexnet (transfer-learned Alexnet); the pre-trained TLAlexnet was generated by domain adaptation using the tactile images. TLAlexnet achieved classification accuracies of 85.0%, 91.7%, and 85.7% for the single primary colors of force, vibration, and temperature, respectively, improving to 96.4% when the three HPCs were used together. In addition, the proposed seven-layer CNN, constructed with common filtering parameters and trained on the obtained tactile images, achieved a classification performance of 98.2%. Thus, highly accurate classification was realized by using the three HPC elements.

Item: A Study on AR Authoring using Mobile Devices for Educators (The Eurographics Association, 2018)
Authors: Chu, Kinfung; Lu, Weiquan; Oka, Kiyoshi; Takashima, Kazuki; Kitamura, Yoshifumi
Augmented Reality (AR) on consumer devices is now commonplace, finding application in areas such as online retail and gaming. Among these, school education can especially benefit from the interactivity and expressiveness provided by AR technology, facilitating the learning process of students. Although AR-enabled hardware and applications are becoming increasingly accessible to both students and teachers, the entry requirement for AR authoring is still prohibitively high for school teachers.
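The "frequency-analyzed image" in the tactile-probe item above is presumably a time-frequency representation of the vibration channel. A generic magnitude-spectrogram sketch (not the authors' exact processing; window and rate are illustrative) is:

```python
import numpy as np

def spectrogram(signal, win=64, hop=32):
    """Magnitude STFT: one column per windowed frame of the vibration signal."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # (freq_bins, frames)

# A 50 Hz vibration sampled at 1 kHz should concentrate energy in one band.
fs = 1000
t = np.arange(fs) / fs
image = spectrogram(np.sin(2 * np.pi * 50 * t))
```

Stacking such images per primary color (force, vibration, temperature) yields exactly the kind of 2D input a CNN such as the transfer-learned Alexnet can classify.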
Given the vast variation in students' learning abilities and school curricula, an AR authoring tool that allows rapid and easy creation of educational content is highly desirable among teachers. This paper proposes a gesture-based control method that satisfies the needs of educational AR authoring and presents prototypes that work well with smartphone VR head mounts. Through user studies we show that our proposed control method is simple but effective for basic authoring tasks. Our prototypes were also found to be useful in teaching different concepts that require a high degree of spatial comprehension.

Item: Studying Levels of Presence in a Virtual Environment Simulating Drug Use in Schools: Effect on Different Character Perspectives (The Eurographics Association, 2018)
Authors: Christofi, Maria; Baka, Evangelia; Stavroulia, Kalliopi Evangelia; Michael-Grigoriou, Despina; Lanitis, Andreas; Thalmann, Nadia Magnenat
This paper studies the aspect of presence in a Virtual Reality (VR) environment that can be used for training purposes in the education sector, and more specifically for teacher training and professional development. During the VR experience, trainees had the chance to view the world from different perspectives, through the eyes of different characters appearing in the scene. The experimental evaluation aims to examine the effect of viewing the experience from different perspectives and viewpoints on the overall user experience and the level of presence achieved. To accomplish these objectives, an experiment was performed investigating presence and the correlation between presence and the different viewpoints/perspectives. To measure presence, a combination of methods was used, including two different questionnaires, an EEG device (EMOTIV EPOC+), and the analysis of heart rates.
The results indicate that high levels of presence were recorded and that increased levels of presence are associated with viewing the VE from a student rather than a teacher perspective.