Emotion-based Interaction Technique Using User's Voice and Facial Expressions in Virtual and Augmented Reality
dc.contributor.author | Ko, Beom-Seok | en_US |
dc.contributor.author | Kang, Ho-San | en_US |
dc.contributor.author | Lee, Kyuhong | en_US |
dc.contributor.author | Braunschweiler, Manuel | en_US |
dc.contributor.author | Zünd, Fabio | en_US |
dc.contributor.author | Sumner, Robert W. | en_US |
dc.contributor.author | Choi, Soo-Mi | en_US |
dc.contributor.editor | Chaine, Raphaëlle | en_US |
dc.contributor.editor | Deng, Zhigang | en_US |
dc.contributor.editor | Kim, Min H. | en_US |
dc.date.accessioned | 2023-10-09T07:42:56Z | |
dc.date.available | 2023-10-09T07:42:56Z | |
dc.date.issued | 2023 | |
dc.description.abstract | This paper presents a novel interaction approach based on a user's emotions within augmented reality (AR) and virtual reality (VR) environments to achieve immersive interaction with virtual intelligent characters. To identify the user's emotions through voice, the Google Speech-to-Text API is used to transcribe speech, and the RoBERTa language model is then used to classify emotions. In the AR environment, the intelligent character can change the styles and properties of objects based on the user's recognized emotions during a dialog. In the VR environment, the movement of the user's eyes and lower face is tracked by the VIVE Pro Eye headset and Facial Tracker, and EmotionNet is used for emotion recognition. The virtual environment can then be changed based on the user's recognized emotions. Our findings present an interesting idea for integrating emotionally intelligent characters in AR/VR using generative AI and facial expression recognition. | en_US |
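The abstract describes a two-stage voice pipeline: speech transcription via the Google Speech-to-Text API, followed by RoBERTa-based emotion classification of the transcript. Below is a minimal Python sketch of that pipeline, assuming the google-cloud-speech client library and a publicly available RoBERTa-family emotion checkpoint on Hugging Face; the paper's exact model weights and emotion label set are not specified in this record, so the checkpoint name here is a stand-in.

```python
# Minimal sketch of the voice-emotion pipeline from the abstract.
# Assumptions: google-cloud-speech is installed and authenticated, and the
# "j-hartmann/emotion-english-distilroberta-base" checkpoint (a RoBERTa-family
# emotion classifier) stands in for the paper's unspecified RoBERTa model.
from google.cloud import speech
from transformers import pipeline


def transcribe(audio_bytes: bytes) -> str:
    """Transcribe a short utterance with the Google Speech-to-Text API."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    response = client.recognize(config=config, audio=audio)
    # Join the top alternative of each recognized segment.
    return " ".join(r.alternatives[0].transcript for r in response.results)


# Text-classification pipeline mapping an utterance to a discrete emotion label.
emotion_classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
)


def classify_emotion(text: str) -> str:
    """Return the top emotion label (e.g. 'joy', 'anger') for the transcript."""
    return emotion_classifier(text)[0]["label"]


# Usage: the resulting label would drive the intelligent character's response,
# e.g. changing object styles in AR or the surrounding environment in VR.
# emotion = classify_emotion(transcribe(recorded_utterance))
```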
dc.description.sectionheaders | Posters | |
dc.description.seriesinformation | Pacific Graphics Short Papers and Posters | |
dc.identifier.doi | 10.2312/pg.20231286 | |
dc.identifier.isbn | 978-3-03868-234-9 | |
dc.identifier.pages | 121-122 | |
dc.identifier.pages | 2 pages | |
dc.identifier.uri | https://doi.org/10.2312/pg.20231286 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/pg20231286 | |
dc.publisher | The Eurographics Association | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | CCS Concepts: Human-centered computing -> Human computer interaction (HCI); Hardware -> VIVE Pro Eye; Facial Tracker | |
dc.subject | Human-centered computing | |
dc.subject | Human computer interaction (HCI) | |
dc.subject | Hardware | |
dc.subject | VIVE Pro Eye | |
dc.subject | Facial Tracker | |
dc.title | Emotion-based Interaction Technique Using User's Voice and Facial Expressions in Virtual and Augmented Reality | en_US |