Browsing by Author "Sumner, Robert W."
Now showing 1 - 3 of 3
Item
Emotion-based Interaction Technique Using User's Voice and Facial Expressions in Virtual and Augmented Reality (The Eurographics Association, 2023)
Ko, Beom-Seok; Kang, Ho-San; Lee, Kyuhong; Braunschweiler, Manuel; Zünd, Fabio; Sumner, Robert W.; Choi, Soo-Mi; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
This paper presents a novel interaction approach based on a user's emotions within augmented reality (AR) and virtual reality (VR) environments to achieve immersive interaction with virtual intelligent characters. To identify the user's emotions through voice, the Google Speech-to-Text API is used to transcribe speech, and the RoBERTa language model is then used to classify emotions. In the AR environment, the intelligent character can change the styles and properties of objects based on the user's recognized emotions during a dialog. In the VR environment, the movement of the user's eyes and lower face is tracked by the VIVE Pro Eye and Facial Tracker, and EmotionNet is used for emotion recognition; the virtual environment can then be changed based on the user's recognized emotions. Our findings present an interesting idea for integrating emotionally intelligent characters in AR/VR using generative AI and facial expression recognition.

Item
Keys-to-Sim: Transferring Hand-Crafted Key-framed Animations to Simulated Figures using Wide Band Stochastic Trajectory Optimization (The Eurographics Association, 2018)
Borer, Dominik; Guay, Martin; Sumner, Robert W.; Tam, Gary K. L.; Vidal, Franck
The vision of fully simulating characters and their environments has the potential to offer rich interactions between characters and objects in the virtual world. However, this introduces a challenging problem similar to controlling robotic figures: computing the necessary torques to perform a given task. In this paper, we address the problem of transferring hand-crafted kinematic motions to a fully simulated figure by computing the open-loop controls necessary to reproduce the target motion. One key ingredient for successful control is the mechanical feasibility of the target motion. While several methods have been successful at replicating captured human motion, there has not yet been a method capable of handling artist-authored key-framed movements that can violate the laws of physics or exceed the mechanical limits of the character. Due to the curse of dimensionality, sampling-based optimization methods typically restrict the search to a narrow band, which limits exploration of feasible motions and results in a failure to reproduce the desired motion when a large deviation is required. In this paper, we solve this problem by combining a window-based breakdown of the controls along the temporal dimension with a global wide search strategy that keeps locally sub-optimal samples throughout the optimization.

Item
Mathematics Input for Educational Applications in Virtual Reality (The Eurographics Association, 2021)
Sansonetti, Luigi; Chatain, Julia; Caldeira, Pedro; Fayolle, Violaine; Kapur, Manu; Sumner, Robert W.; Orlosky, Jason; Reiners, Dirk; Weyers, Benjamin
Virtual Reality (VR) enables new ways of learning by providing an interactive environment to learn through failure and by allowing new interaction methods that engage the users' bodies. Literature on productive failure and embodied cognition shows that these two aspects are particularly important for mathematics education. However, very little research has looked into how to input mathematical expressions in VR. This gap impairs the learning process, as it prevents learners from connecting the VR mathematical objects with their formal representations. In this paper, we bridge this gap by presenting two interaction techniques for mathematics input in VR: a Keyboard-like method and a Drag-and-drop method. We report the results of our quantitative user study in terms of usability, ease of learning, low overhead, task load, and motion sickness.
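
The voice-based emotion pipeline described in the first item (Google Speech-to-Text for transcription, then a RoBERTa classifier for emotion labels) can be illustrated with a minimal Python sketch. The model checkpoint, audio settings, and helper names below are assumptions chosen for illustration and are not taken from the paper.

# Minimal sketch (assumed configuration, not the paper's code): transcribe a
# short utterance with the Google Cloud Speech-to-Text API, then classify the
# emotion of the transcript with a RoBERTa-family model from Hugging Face.
from google.cloud import speech
from transformers import pipeline

# Assumed checkpoint; the abstract only says "RoBERTa", not which weights were used.
emotion_classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
)


def transcribe(audio_bytes: bytes) -> str:
    """Transcribe 16 kHz LINEAR16 audio with Google Speech-to-Text."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)


def recognize_emotion(audio_bytes: bytes) -> str:
    """Return an emotion label (e.g. 'joy') for a spoken utterance."""
    text = transcribe(audio_bytes)
    return emotion_classifier(text)[0]["label"]

The recognized label would then drive the intelligent character's response (changing object styles in AR, or altering the virtual environment in VR), which is application logic outside the scope of this sketch.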
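
The second item's wide-band stochastic trajectory optimization combines two ideas: the open-loop controls are optimized window by window along the time axis, and a wide band of samples, including locally sub-optimal ones, is kept so the search can move far from the hand-crafted motion when physical feasibility demands it. The sketch below is a generic illustration of that strategy, not the authors' implementation; simulate and motion_cost are hypothetical stand-ins for a physics rollout and a pose-tracking objective, and all parameter values are placeholders.

import numpy as np


def optimize_controls(init_controls, simulate, motion_cost,
                      window_size=10, n_keep=8, n_noisy=8,
                      n_iters=20, sigma=0.3, rng=None):
    """Window-by-window stochastic search over an open-loop control trajectory.

    init_controls : (T, n_dofs) array of joint torques (the initial guess)
    simulate      : maps a (T, n_dofs) control trajectory to simulated states
    motion_cost   : scores simulated states against the key-framed target
    """
    rng = np.random.default_rng() if rng is None else rng
    controls = np.array(init_controls, dtype=float)
    T = controls.shape[0]
    for start in range(0, T, window_size):          # temporal breakdown into windows
        end = min(start + window_size, T)
        elites = [controls[start:end].copy() for _ in range(n_keep)]
        for _ in range(n_iters):
            # sample a wide band of perturbations around *every* kept sample,
            # and keep the parents so the best candidate is never lost
            candidates = list(elites)
            candidates += [parent + sigma * rng.standard_normal(parent.shape)
                           for parent in elites for _ in range(n_noisy)]
            scored = []
            for cand in candidates:
                trial = controls.copy()
                trial[start:end] = cand
                scored.append((motion_cost(simulate(trial)), cand))
            scored.sort(key=lambda sc: sc[0])
            # retain the best half plus randomly chosen sub-optimal samples, so the
            # search is not confined to a narrow band around the current optimum
            elites = [c for _, c in scored[:n_keep // 2]]
            rest = scored[n_keep // 2:]
            picks = rng.choice(len(rest), size=n_keep - len(elites), replace=False)
            elites += [rest[i][1] for i in picks]
        controls[start:end] = elites[0]             # commit the best window, then move on
    return controls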