Browsing by Author "Choi, Soo-Mi"
Now showing 1 - 2 of 2
Item
Avatar Emotion Recognition using Non-verbal Communication (The Eurographics Association, 2023)
Bazargani, Jalal Safari; Sadeghi-Niaraki, Abolghasem; Choi, Soo-Mi; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Among the sources of information about emotions, body movements, known as "kinesics" in non-verbal communication, have received limited attention. This research gap suggests the need to investigate suitable body-movement-based approaches for making communication in virtual environments more realistic. Therefore, this study proposes an automated emotion recognition approach suitable for use in virtual environments, consisting of two recognition pipelines. For the first pipeline, upper-body keypoint-based recognition, the HEROES video dataset was used to train a bidirectional long short-term memory (BiLSTM) model on upper-body keypoints that predicts four discrete emotions: boredom, disgust, happiness, and interest, achieving an accuracy of 84%. For the second pipeline, wrist-movement-based recognition, a random forest model was trained on 17 features computed from acceleration data of wrist movements along each axis, achieving an accuracy of 63% in distinguishing three discrete emotions: sadness, neutrality, and happiness. The findings suggest that the proposed approach is a notable step toward automated emotion recognition without using any sensors beyond the head-mounted display (HMD).

Item
Emotion-based Interaction Technique Using User's Voice and Facial Expressions in Virtual and Augmented Reality (The Eurographics Association, 2023)
Ko, Beom-Seok; Kang, Ho-San; Lee, Kyuhong; Braunschweiler, Manuel; Zünd, Fabio; Sumner, Robert W.; Choi, Soo-Mi; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
This paper presents a novel interaction approach based on a user's emotions within augmented reality (AR) and virtual reality (VR) environments to achieve immersive interaction with virtual intelligent characters. To identify the user's emotions from voice, the Google Speech-to-Text API is used to transcribe speech, and the RoBERTa language model is then used to classify emotions. In the AR environment, the intelligent character can change the styles and properties of objects based on the user's recognized emotions during a dialog. In the VR environment, the movement of the user's eyes and lower face is tracked with the VIVE Pro Eye headset and Facial Tracker, and EmotionNet is used for emotion recognition; the virtual environment can then be changed based on the user's recognized emotions. Our findings present an interesting direction for integrating emotionally intelligent characters in AR/VR using generative AI and facial expression recognition.
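
The first item's keypoint pipeline can be pictured as a sequence classifier over per-frame upper-body keypoints. The following is a minimal sketch only; the keypoint count, sequence length, and hyperparameters are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch: a bidirectional LSTM over sequences of upper-body
# keypoints that outputs one of four discrete emotions, as described in the
# first abstract above. Shapes and hyperparameters are assumed values.
import torch
import torch.nn as nn

NUM_KEYPOINTS = 13        # assumed number of upper-body keypoints
FEATURES_PER_KP = 2       # (x, y) coordinates per keypoint
SEQ_LEN = 60              # assumed frames per clip
NUM_EMOTIONS = 4          # boredom, disgust, happiness, interest

class KeypointBiLSTM(nn.Module):
    def __init__(self, hidden_size=128, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(
            input_size=NUM_KEYPOINTS * FEATURES_PER_KP,
            hidden_size=hidden_size,
            num_layers=num_layers,
            batch_first=True,
            bidirectional=True,
        )
        self.classifier = nn.Linear(2 * hidden_size, NUM_EMOTIONS)

    def forward(self, x):
        # x: (batch, SEQ_LEN, NUM_KEYPOINTS * FEATURES_PER_KP)
        out, _ = self.lstm(x)
        # classify from the final time step's bidirectional representation
        return self.classifier(out[:, -1, :])

if __name__ == "__main__":
    model = KeypointBiLSTM()
    clips = torch.randn(8, SEQ_LEN, NUM_KEYPOINTS * FEATURES_PER_KP)
    logits = model(clips)            # (8, 4) emotion scores
    print(logits.argmax(dim=1))      # predicted emotion index per clip
```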
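
The same item's wrist-movement pipeline trains a random forest on summary features of per-axis acceleration. The paper's 17 specific features are not listed here; the sketch below uses a few assumed per-axis statistics and dummy data purely to show the shape of such a pipeline.

```python
# Illustrative sketch: a random-forest classifier over statistical features of
# wrist-acceleration windows, distinguishing sadness, neutrality, and
# happiness. The features and data below are assumptions, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def acceleration_features(window):
    """Compute per-axis summary statistics for a (samples, 3) acceleration window."""
    feats = []
    for axis in range(window.shape[1]):
        signal = window[:, axis]
        feats += [
            signal.mean(),
            signal.std(),
            signal.min(),
            signal.max(),
            np.abs(np.diff(signal)).mean(),  # mean absolute change
        ]
    return np.array(feats)

# Dummy data standing in for labeled wrist-movement windows.
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 128, 3))   # 200 windows, 128 samples, 3 axes
labels = rng.integers(0, 3, size=200)      # 0=sadness, 1=neutrality, 2=happiness

X = np.stack([acceleration_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, labels)
print(clf.predict(X[:5]))                  # predicted emotion indices
```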
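
The second item's voice pipeline transcribes speech and then classifies the transcript's emotion with a RoBERTa-family model. A minimal sketch follows, assuming a publicly available DistilRoBERTa emotion checkpoint in place of the authors' own setup; the transcript would come from the Google Speech-to-Text API in the paper, which is not reproduced here.

```python
# Illustrative sketch: classify the emotion of a transcribed utterance with a
# RoBERTa-family text classifier. The model name is an assumed public
# checkpoint, not the one used in the paper.
from transformers import pipeline

emotion_classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
)

def classify_utterance(transcript: str):
    """Return the top predicted emotion label and score for one utterance."""
    return emotion_classifier(transcript)[0]

if __name__ == "__main__":
    # In the paper, this transcript would be produced by Google Speech-to-Text.
    print(classify_utterance("I am so excited to meet this virtual character!"))
```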