
Browsing by Author "Monclús, Eva"

Now showing 1 - 2 of 2
  • AvatarGo: Plug and Play self-avatars for VR
    (The Eurographics Association, 2022) Ponton, Jose Luis; Monclús, Eva; Pelechano, Nuria; Vanderhaeghe, David
    The use of self-avatars in a VR application can enhance presence and embodiment, which leads to a better user experience. In collaborative VR, it also facilitates non-verbal communication. Currently, it is possible to track a few body parts with cheap trackers and then apply IK methods to animate a character. However, the correspondence between trackers and avatar joints is typically fixed ad hoc, which is enough to animate the avatar but causes noticeable mismatches between the user's body pose and the avatar's. In this paper we present a fast and easy-to-set-up system to compute exact offset values, unique for each user, which leads to improvements in avatar movement. Our user study shows that the Sense of Embodiment increased significantly when using exact offsets as opposed to fixed ones. We also allowed the users to see a semitransparent avatar overlaid with their real body to objectively evaluate the quality of the avatar movement with our technique.
  • An Interaction Metaphor for Enhanced VR-based Volume Segmentation
    (The Eurographics Association, 2023) Monclús, Eva; Vázquez, Pere-Pau; Hansen, Christian; Procter, James; Raidou, Renata G.; Jönsson, Daniel; Höllt, Thomas
    The segmentation of medical models is a complex and time-intensive process required for both diagnosis and surgical preparation. Despite advancements in deep learning, neural networks can only automatically segment a limited number of structures, often requiring further validation by a domain expert. In numerous instances, manual segmentation is still necessary. Virtual Reality (VR) technology can enhance the segmentation process by providing improved perception of segmentation outcomes and enabling interactive supervision by experts. However, inspecting how the segmentation algorithm's progress evolves and defining new seeds require seeing the inner layers of the volume, which can be costly and difficult to achieve with typical metaphors such as clipping planes. In this paper, we introduce a wedge-shaped 3D interaction metaphor designed to facilitate VR-based segmentation through detailed inspection and guidance. User evaluations demonstrated increased satisfaction with usability and faster task completion times using the tool.

Eurographics Association © 2013-2025  |  System hosted at Graz University of Technology      
DSpace software copyright © 2002-2025 LYRASIS
