A Gaze Prediction Model for Task-Oriented Virtual Reality
dc.contributor.author | Mammou, Konstantina | en_US |
dc.contributor.author | Mania, Katerina | en_US |
dc.contributor.editor | Günther, Tobias | en_US |
dc.contributor.editor | Montazeri, Zahra | en_US |
dc.date.accessioned | 2025-05-09T09:31:15Z | |
dc.date.available | 2025-05-09T09:31:15Z | |
dc.date.issued | 2025 | |
dc.description.abstract | In this work, we present a gaze prediction model for task-oriented Virtual Reality environments. Unlike past work, which focuses on gaze prediction for specific tasks, we investigate the role and potential of temporal continuity in enabling accurate predictions across diverse task categories. The model reduces input complexity while maintaining high prediction accuracy. Evaluated on the OpenNEEDS dataset, it significantly outperforms baseline methods. The model demonstrates strong potential for integration into gaze-based VR interactions and foveated rendering pipelines. Future work will focus on runtime optimization and expanding evaluation across diverse VR scenarios. | en_US |
dc.description.sectionheaders | Posters | |
dc.description.seriesinformation | Eurographics 2025 - Posters | |
dc.identifier.doi | 10.2312/egp.20251020 | |
dc.identifier.isbn | 978-3-03868-269-1 | |
dc.identifier.issn | 1017-4656 | |
dc.identifier.pages | 2 pages | |
dc.identifier.uri | https://doi.org/10.2312/egp.20251020 | |
dc.identifier.uri | https://diglib.eg.org/handle/10.2312/egp20251020 | |
dc.publisher | The Eurographics Association | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | CCS Concepts: Human-centered computing → Virtual reality; Computing methodologies → Neural networks; Rendering | |
dc.subject | Human-centered computing → Virtual reality
dc.subject | Computing methodologies → Neural networks | |
dc.subject | Rendering | |
dc.title | A Gaze Prediction Model for Task-Oriented Virtual Reality | en_US |
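The abstract describes a model that exploits the temporal continuity of gaze to predict future gaze points from a reduced set of inputs. The record contains no code; the snippet below is only a minimal sketch, in PyTorch, of how such a temporal gaze predictor could look: a GRU over a short history of gaze directions. The architecture, the input features (unit gaze-direction vectors), the sequence length, and the hidden size are all assumptions made for illustration, not the authors' model.

import torch
import torch.nn as nn

class GazeSequencePredictor(nn.Module):
    """Toy sequence model: predicts the next gaze direction from a short
    history of past gaze directions. Architecture and feature choices are
    illustrative assumptions, not the model presented in the poster."""

    def __init__(self, input_dim=3, hidden_dim=64):
        super().__init__()
        # A GRU encodes the temporal continuity of recent gaze samples.
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True)
        # A linear head maps the final hidden state to a 3D gaze direction.
        self.head = nn.Linear(hidden_dim, 3)

    def forward(self, gaze_history):
        # gaze_history: (batch, seq_len, 3) unit gaze-direction vectors.
        _, h_n = self.gru(gaze_history)
        pred = self.head(h_n[-1])
        # Normalise so the output is again a unit direction vector.
        return pred / pred.norm(dim=-1, keepdim=True)

if __name__ == "__main__":
    model = GazeSequencePredictor()
    history = torch.randn(8, 30, 3)            # 8 samples, 30 past frames (assumed window)
    history = history / history.norm(dim=-1, keepdim=True)
    next_gaze = model(history)                 # (8, 3) predicted gaze directions
    print(next_gaze.shape)

In a real pipeline the predicted direction would then drive a gaze-based interaction technique or a foveated rendering pass, as the abstract suggests; how the authors actually encode the input and train the network is not specified in this record.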