Browsing by Author "Lemonari, Marilena"
Now showing 1 - 2 of 2
Item: Collaborative VR: Solving Riddles in the Concept of Escape Rooms (The Eurographics Association, 2023)
Ioannou, Afxentis; Lemonari, Marilena; Liarokapis, Fotis; Aristidou, Andreas; Pelechano, Nuria; Liarokapis, Fotis; Rohmer, Damien; Asadipour, Ali

The recent state of VR technology enables users to have quick and easy access to multiple VR functionalities, prompting researchers to explore various aspects of user experience in virtual environments. In this work, we study alternative means of user communication in collaborative virtual environments (CVEs). We are especially interested in how users manage to convey messages to each other when they cannot see, hear, or text one another. We aim to understand how users choose to utilize the tools provided to them in virtual environments, and we report their feedback, i.e., how this affects engagement level, performance, etc. The objective of our work is to determine the effects of integrating alternative means of communication on users' experience in VR; to examine this, we choose a collaborative VR escape room as a case study. We carry out a user study to evaluate our hypotheses on the effects of non-traditional communication means when performing computer-supported cooperative work (CSCW). We find that players manage to complete their tasks similarly to real-life scenarios, even when traditional forms of interpersonal interaction are not allowed.
Through our user survey, we also conclude that this communication option is worth integrating into other applications as well, which raises further questions about the full potential of incorporating, in VR, the several alternative functionalities that people subtly use in real life.

Item: LexiCrowd: A Learning Paradigm towards Text to Behaviour Parameters for Crowds (The Eurographics Association, 2024)
Lemonari, Marilena; Andreou, Nefeli; Pelechano, Nuria; Charalambous, Panayiotis; Chrysanthou, Yiorgos; Pelechano, Nuria; Pettré, Julien

Creating believable virtual crowds, controllable by high-level prompts, is essential for creators trading off authoring freedom against simulation quality. The flexibility and familiarity of natural language, in particular, motivate the use of text to guide the generation process. Capturing the essence of textually described crowd movements in the form of meaningful and usable parameters is challenging due to the lack of paired ground-truth data and the inherent ambiguity between the two modalities. In this work, we leverage a pre-trained Large Language Model (LLM) to create pseudo-pairs of text and behaviour labels. We train a variational auto-encoder (VAE) on the synthetic dataset, constraining the latent space into interpretable behaviour parameters by incorporating a latent label loss. To showcase our model's capabilities, we deploy a survey where humans provide textual descriptions of real crowd datasets. We demonstrate that our model is able to parameterise unseen sentences and produce novel behaviours, capturing the essence of the given sentence; our behaviour space is compatible with simulator parameters, enabling the generation of plausible crowds (text-to-crowds). We also conduct feasibility experiments exhibiting the potential of the output text embeddings for full sentence generation from a behaviour profile.
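The LexiCrowd abstract describes a VAE objective augmented with a latent label loss that constrains part of the latent space to interpretable behaviour parameters. The following is a minimal sketch of that kind of composite objective, not the paper's actual formulation: the MSE forms, the weighting terms, and the function name `vae_losses` are assumptions for illustration only.

```python
import numpy as np

def vae_losses(x, x_recon, mu, logvar, z_behaviour, labels, beta=1.0, lam=1.0):
    """Composite VAE objective (illustrative sketch, not the paper's exact loss).

    total = reconstruction + beta * KL(q(z|x) || N(0, I)) + lam * latent-label loss,
    where the latent-label term ties designated latent dimensions (z_behaviour)
    to the pseudo behaviour labels produced for each text description.
    """
    recon = np.mean((x - x_recon) ** 2)                          # reconstruction error
    kl = -0.5 * np.mean(1 + logvar - mu ** 2 - np.exp(logvar))   # KL to standard normal
    label = np.mean((z_behaviour - labels) ** 2)                 # latent label loss
    return recon + beta * kl + lam * label, recon, kl, label
```

When the designated latent dimensions match the behaviour labels, the label term vanishes, leaving the usual evidence-lower-bound terms; increasing `lam` trades reconstruction quality for interpretability of those dimensions.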