STAR collection (since 2024)


ARTICLES

A Survey of Procedural Modelling Methods for Layout Generation of Virtual Scenes
Cogo, Emir; Krupalija, Ehlimana; Prazina, Irfan; Bećirović, Šeila; Okanović, Vensada; Rizvić, Selma; Mulahasanović, Razija Turčinhodžić

State of the Art in Efficient Translucent Material Rendering with BSSRDF
Liang, Shiyu; Gao, Yang; Hu, Chonghao; Zhou, Peng; Hao, Aimin; Wang, Lili; Qin, Hong

State of the Art Reports

A Survey on Cage-based Deformation of 3D Models
Ströter, Daniel; Thiery, Jean-Marc; Hormann, Kai; Chen, Jiong; Chang, Qingjun; Besler, Sebastian; Mueller-Roemer, Johannes Sebastian; Boubekeur, Tamy; Stork, André; Fellner, Dieter W.

Text-to-3D Shape Generation
Lee, Hanhung; Savva, Manolis; Chang, Angel Xuan

Recent Trends in 3D Reconstruction of General Non-Rigid Scenes
Yunus, Raza; Lenssen, Jan Eric; Niemeyer, Michael; Liao, Yiyi; Rupprecht, Christian; Theobalt, Christian; Pons-Moll, Gerard; Huang, Jia-Bin; Golyanik, Vladislav; Ilg, Eddy

State of the Art on Diffusion Models for Visual Computing
Po, Ryan; Yifan, Wang; Liu, C. Karen; Liu, Lingjie; Mildenhall, Ben; Nießner, Matthias; Ommer, Björn; Theobalt, Christian; Wonka, Peter; Wetzstein, Gordon; Golyanik, Vladislav; Aberman, Kfir; Barron, Jon T.; Bermano, Amit; Chan, Eric; Dekel, Tali; Holynski, Aleksander; Kanazawa, Angjoo

A Survey on Realistic Virtual Human Animations: Definitions, Features and Evaluations
Rekik, Rim; Wuhrer, Stefanie; Hoyet, Ludovic; Zibrek, Katja; Olivier, Anne-Hélène

Virtual Instrument Performances (VIP): A Comprehensive Review
Kyriakou, Theodoros; Alvarez de la Campa Crespo, Merce; Panayiotou, Andreas; Chrysanthou, Yiorgos; Charalambous, Panayiotis; Aristidou, Andreas

Cues to fast-forward collaboration: A Survey of Workspace Awareness and Visual Cues in XR Collaborative Systems
Assaf, Rodrigo; Mendes, Daniel; Rodrigues, Rui

Snow and Ice Animation Methods in Computer Graphics
Goswami, Prashant

Engage all Your Senses

A Systematic Literature Review of User Evaluation in Immersive Analytics
Friedl-Knirsch, Judith; Pointecker, Fabian; Pfistermüller, Sandra; Stach, Christian; Anthes, Christoph; Roth, Daniel

Open Your Ears and Take a Look: A State-of-the-Art Report on the Integration of Sonification and Visualization
Enge, Kajetan; Elmquist, Elias; Caiola, Valentina; Rönnberg, Niklas; Rind, Alexander; Iber, Michael; Lenzi, Sara; Lan, Fangfei; Höldrich, Robert; Aigner, Wolfgang

Euclidean and Non-Euclidean Spaces

State of the Art of Graph Visualization in non-Euclidean Spaces
Miller, Jacob; Bhatia, Dhruv; Kobourov, Stephen

The State of the Art in Visual Analytics for 3D Urban Data
Miranda, Fabio; Ortner, Thomas; Moreira, Gustavo; Hosseini, Maryam; Vuckovic, Milena; Biljecki, Filip; Silva, Claudio T.; Lage, Marcos; Ferreira, Nivan

ORIGINAL ARTICLES

Interactive Visualization on Large High‐Resolution Displays: A Survey
Belkacem, Ilyasse; Tominski, Christian; Médoc, Nicolas; Knudsen, Søren; Dachselt, Raimund; Ghoniem, Mohammad



Recent Submissions

Now showing 1 - 15 of 15
  • A Survey of Procedural Modelling Methods for Layout Generation of Virtual Scenes
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Cogo, Emir; Krupalija, Ehlimana; Prazina, Irfan; Bećirović, Šeila; Okanović, Vensada; Rizvić, Selma; Mulahasanović, Razija Turčinhodžić; Alliez, Pierre; Wimmer, Michael
    As virtual worlds continue to rise in popularity, so do the expectations of users for the content of virtual scenes. Virtual worlds must be large in scope and offer enough freedom of movement to keep the audience occupied at all times. For content creators, it is difficult to keep up by manually producing the surrounding content. Therefore, the application of procedural modelling techniques is required. Virtual worlds often mimic the real world, which is composed of organized and connected outdoor and indoor layouts. It is expected that all content is present in the virtual scene and that a user can navigate streets, enter buildings, and interact with furniture within a single virtual world. While there are many procedural methods for generating different layout types, they mostly focus on a single layout type, whereas complete scene generation is greatly underrepresented. This paper aims to identify the coverage of layout types by different methods, because similar issues exist for the generation of content of different layout types. When creating a new method for layout generation, it is important to know whether the results of existing methods can be chained with other methods. This paper presents a survey of existing procedural modelling methods, organized into five categories based on the core approach: pure subdivision, grammar‐based, data‐driven, optimization, and simulation. Information about the covered layout types, the possibility of user interaction during the generation process, and the input and output shape types of the generated content is provided for each surveyed method. The input and output shape types can be used to identify which methods can continue the generation by using the output of other methods as their input. It was concluded that all surveyed methods work for only a few different layout types simultaneously. Moreover, only 35% of the surveyed methods offer interaction with the user after completing the initial process of space generation. Most existing approaches do not perform transformations of shape types. A significant number of methods use the irregular shape type as input and generate the same shape type as the output, which is sufficient for coverage of all layout types when generating a complete virtual world.
  • State of the Art in Efficient Translucent Material Rendering with BSSRDF
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Liang, Shiyu; Gao, Yang; Hu, Chonghao; Zhou, Peng; Hao, Aimin; Wang, Lili; Qin, Hong; Alliez, Pierre; Wimmer, Michael
    Sub‐surface scattering has long been an important feature of translucent material rendering. When light travels through optically thick media, its transport within the medium can be approximated using diffusion theory and is appropriately described by the bidirectional scattering‐surface reflectance distribution function (BSSRDF). BSSRDF methods rely on assumptions about object geometry and light distribution in the medium, which limits their applicability to general participating media problems. However, because path tracing carries a high computational cost, BSSRDF methods are often favoured for their suitability for real‐time applications. We review these methods and discuss the most recent breakthroughs in this field. We begin by summarizing various BSSRDF models and then implement most of them in a 2D searchlight problem to demonstrate their differences. We focus on acceleration methods using BSSRDF, which we categorize into two primary groups: pre‐computation and texture methods. We then cover related topics, including applications and advanced areas where BSSRDF is used, as well as problems that are important yet often ignored in sub‐surface scattering estimation. At the end of this survey, we point out remaining constraints and challenges, which may motivate future work to facilitate sub‐surface scattering.
  • A Survey on Cage-based Deformation of 3D Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Ströter, Daniel; Thiery, Jean-Marc; Hormann, Kai; Chen, Jiong; Chang, Qingjun; Besler, Sebastian; Mueller-Roemer, Johannes Sebastian; Boubekeur, Tamy; Stork, André; Fellner, Dieter W.; Aristidou, Andreas; Macdonnell, Rachel
    Interactive deformation via control handles is essential in computer graphics for the modeling of 3D geometry. Deformation control structures include lattices for free-form deformation and skeletons for character articulation, but this report focuses on cage-based deformation. Cages for deformation control are coarse polygonal meshes that encase the to-be-deformed geometry, enabling high-resolution deformation. Cage-based deformation enables users to quickly manipulate 3D geometry by deforming the cage. Due to their utility, cage-based deformation techniques increasingly appear in many geometry modeling applications. For this reason, the computer graphics community has invested a great deal of effort in the past decade and beyond into improving automatic cage generation and cage-based deformation. Recent advances have significantly extended the practical capabilities of cage-based deformation methods. As a result, there is a large body of research on cage-based deformation. In this report, we provide a comprehensive overview of the current state of the art in cage-based deformation of 3D geometry. We discuss current methods in terms of deformation quality, practicality, and precomputation demands. In addition, we highlight potential future research directions that overcome current issues and extend the set of practical applications. In conjunction with this survey, we publish an application to unify the most relevant deformation methods. Our report is intended for computer graphics researchers, developers of interactive geometry modeling applications, and 3D modeling and character animation artists.
  • Text-to-3D Shape Generation
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Lee, Hanhung; Savva, Manolis; Chang, Angel Xuan; Aristidou, Andreas; Macdonnell, Rachel
    Recent years have seen an explosion of work and interest in text-to-3D shape generation. Much of the progress is driven by advances in 3D representations, large-scale pretraining and representation learning for text and image data enabling generative AI models, and differentiable rendering. Computational systems that can perform text-to-3D shape generation have captivated the popular imagination as they enable non-expert users to easily create 3D content directly from text. However, there are still many limitations and challenges remaining in this problem space. In this state-of-the-art report, we provide a survey of the underlying technology and methods enabling text-to-3D shape generation to summarize the background literature. We then derive a systematic categorization of recent work on text-to-3D shape generation based on the type of supervision data required. Finally, we discuss limitations of the existing categories of methods, and delineate promising directions for future work.
  • Recent Trends in 3D Reconstruction of General Non-Rigid Scenes
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Yunus, Raza; Lenssen, Jan Eric; Niemeyer, Michael; Liao, Yiyi; Rupprecht, Christian; Theobalt, Christian; Pons-Moll, Gerard; Huang, Jia-Bin; Golyanik, Vladislav; Ilg, Eddy; Aristidou, Andreas; Macdonnell, Rachel
    Reconstructing models of the real world, including 3D geometry, appearance, and motion of real scenes, is essential for computer graphics and computer vision. It enables the synthesis of photorealistic novel views, useful for the movie industry and AR/VR applications. It also facilitates the content creation necessary in computer games and AR/VR by avoiding laborious manual design processes. Further, such models are fundamental for intelligent computing systems that need to interpret real-world scenes and actions to act and interact safely with the human world. Notably, the world surrounding us is dynamic, and reconstructing models of dynamic, non-rigidly moving scenes is a severely underconstrained and challenging problem. This state-of-the-art report (STAR) offers the reader a comprehensive summary of state-of-the-art techniques with monocular and multi-view inputs such as data from RGB and RGB-D sensors, among others, conveying an understanding of different approaches, their potential applications, and promising further research directions. The report covers 3D reconstruction of general non-rigid scenes and further addresses the techniques for scene decomposition, editing and controlling, and generalizable and generative modeling. More specifically, we first review the common and fundamental concepts necessary to understand and navigate the field and then discuss the state-of-the-art techniques by reviewing recent approaches that use traditional and machine-learning-based neural representations, including a discussion on the newly enabled applications. The STAR concludes with a discussion of the remaining limitations and open challenges.
  • State of the Art on Diffusion Models for Visual Computing
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Po, Ryan; Yifan, Wang; Golyanik, Vladislav; Aberman, Kfir; Barron, Jon T.; Bermano, Amit; Chan, Eric; Dekel, Tali; Holynski, Aleksander; Kanazawa, Angjoo; Liu, C. Karen; Liu, Lingjie; Mildenhall, Ben; Nießner, Matthias; Ommer, Björn; Theobalt, Christian; Wonka, Peter; Wetzstein, Gordon; Aristidou, Andreas; Macdonnell, Rachel
    The field of visual computing is rapidly advancing due to the emergence of generative artificial intelligence (AI), which unlocks unprecedented capabilities for the generation, editing, and reconstruction of images, videos, and 3D scenes. In these domains, diffusion models are the generative AI architecture of choice. Within the last year alone, the literature on diffusion-based tools and applications has seen exponential growth, and relevant papers are published across the computer graphics, computer vision, and AI communities with new works appearing daily on arXiv. This rapid growth of the field makes it difficult to keep up with all recent developments. The goal of this state-of-the-art report (STAR) is to introduce the basic mathematical concepts of diffusion models and the implementation details and design choices of the popular Stable Diffusion model, and to give an overview of important aspects of these generative AI tools, including personalization, conditioning, and inversion, among others. Moreover, we give a comprehensive overview of the rapidly growing literature on diffusion-based generation and editing, categorized by the type of generated medium, including 2D images, videos, 3D objects, locomotion, and 4D scenes. Finally, we discuss available datasets, metrics, open challenges, and social implications. This STAR provides an intuitive starting point to explore this exciting topic for researchers, artists, and practitioners alike.
  • A Survey on Realistic Virtual Human Animations: Definitions, Features and Evaluations
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Rekik, Rim; Wuhrer, Stefanie; Hoyet, Ludovic; Zibrek, Katja; Olivier, Anne-Hélène; Aristidou, Andreas; Macdonnell, Rachel
    Generating realistic animated virtual humans is a problem that has been extensively studied, with many applications in different types of virtual environments. However, the creation process of such realistic animations is challenging, especially because of the number and variety of influencing factors, which must first be identified and evaluated. In this paper, we attempt to provide a clearer understanding of how the multiple factors studied in the literature impact the level of realism of animated virtual humans, through a survey of studies assessing their realism. This includes a review of features that have been manipulated to increase the realism of virtual humans, as well as evaluation approaches that have been developed. As the challenges of evaluating animated virtual humans in a way that agrees with human perception are still active research problems, this survey further identifies important open problems and directions for future research.
  • Virtual Instrument Performances (VIP): A Comprehensive Review
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Kyriakou, Theodoros; Alvarez de la Campa Crespo, Merce; Panayiotou, Andreas; Chrysanthou, Yiorgos; Charalambous, Panayiotis; Aristidou, Andreas; Aristidou, Andreas; Macdonnell, Rachel
    Driven by recent advancements in Extended Reality (XR), the hype around the Metaverse, and real-time computer graphics, the transformation of the performing arts, particularly in digitizing and visualizing musical experiences, is an ever-evolving landscape. This transformation offers significant potential in promoting inclusivity, fostering creativity, and enabling live performances in diverse settings. However, despite its immense potential, the field of Virtual Instrument Performances (VIP) has remained relatively unexplored due to numerous challenges. These challenges arise from the complex and multi-modal nature of musical instrument performances; the need for high-precision motion capture under occlusions, including the intricate interactions of a musician's body and fingers with instruments; the precise synchronization and seamless integration of various sensory modalities; the accommodation of variations in musicians' playing styles and facial expressions; and instrument-specific nuances. This comprehensive survey delves into the intersection of technology, innovation, and artistic expression in the domain of virtual instrument performances. It explores musical performance multi-modal databases and investigates a wide range of data acquisition methods, encompassing diverse motion capture techniques, facial expression recording, and various approaches for capturing audio and MIDI (Musical Instrument Digital Interface) data. The survey also explores Music Information Retrieval (MIR) tasks, with a particular emphasis on the Musical Performance Analysis (MPA) field, and offers an overview of various works in the realm of Musical Instrument Performance Synthesis (MIPS), encompassing recent advancements in generative models. The ultimate aim of this survey is to unveil the technological limitations, initiate a dialogue about the current challenges, and propose promising avenues for future research at the intersection of technology and the arts.
  • Cues to fast-forward collaboration: A Survey of Workspace Awareness and Visual Cues in XR Collaborative Systems
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Assaf, Rodrigo; Mendes, Daniel; Rodrigues, Rui; Aristidou, Andreas; Macdonnell, Rachel
    Collaboration in extended reality (XR) environments presents complex challenges that revolve around how users perceive the presence, intentions, and actions of their collaborators. This paper delves into the intricate realm of group awareness, focusing specifically on workspace awareness and the innovative visual cues designed to enhance user comprehension. The research begins by identifying a spectrum of collaborative situations drawn from an analysis of XR prototypes in the existing literature. Then, we describe and introduce a novel classification for workspace awareness, along with an exploration of visual cues recently employed in research endeavors. Lastly, we present the key findings and shine a spotlight on promising yet unexplored topics. This work not only serves as a reference for experienced researchers seeking to inform the design of their own collaborative XR applications but also extends a welcoming hand to newcomers in this dynamic field.
  • Snow and Ice Animation Methods in Computer Graphics
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Goswami, Prashant; Aristidou, Andreas; Macdonnell, Rachel
    Snow and ice animation methods are becoming increasingly popular in the field of computer graphics (CG). The applications of snow and ice in CG are varied, ranging from generating realistic background landscapes to avalanches and physical interaction with objects in movies, games, etc. Over the past two decades, several methods have been proposed to capture the time-evolving physical appearance or simulation of snow and ice using different models at different scales. This state-of-the-art report aims to identify existing animation methods in the field, provide an up-to-date summary of the research in CG, and identify gaps for promising future work. Furthermore, we also attempt to identify the primarily related work done on snow and ice in some other disciplines, such as civil or mechanical engineering, and draw a parallel with the similarities and differences in CG.
  • A Systematic Literature Review of User Evaluation in Immersive Analytics
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Friedl-Knirsch, Judith; Pointecker, Fabian; Pfistermüller, Sandra; Stach, Christian; Anthes, Christoph; Roth, Daniel; Garth, Christoph; Kerren, Andreas; Raidou, Renata
    User evaluation is a common and useful tool for systematically generating knowledge and validating novel approaches in the domain of Immersive Analytics. Since this research domain centres around users, user evaluation is of extraordinary relevance. Additionally, Immersive Analytics is an interdisciplinary field of research where different communities bring in their own methodologies. It is vital to investigate and synchronise these different approaches, with the long-term goal of reaching a shared evaluation framework. While there have been several studies focusing on Immersive Analytics as a whole or on certain aspects of the domain, this is the first systematic review of the state of evaluation methodology in Immersive Analytics. The main objective of this systematic literature review is to identify current practice in user evaluation in the domain of Immersive Analytics, following the PRISMA protocol, and thereby to illustrate methodologies and research areas that are still underrepresented in user studies.
  • Open Your Ears and Take a Look: A State-of-the-Art Report on the Integration of Sonification and Visualization
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Enge, Kajetan; Elmquist, Elias; Caiola, Valentina; Rönnberg, Niklas; Rind, Alexander; Iber, Michael; Lenzi, Sara; Lan, Fangfei; Höldrich, Robert; Aigner, Wolfgang; Garth, Christoph; Kerren, Andreas; Raidou, Renata
    The research communities studying visualization and sonification for data display and analysis share exceptionally similar goals, essentially making data of any kind interpretable to humans. One community does so by using visual representations of data, and the other community employs auditory (non-speech) representations of data. While the two communities have a lot in common, they developed mostly in parallel over the course of the last few decades. With this STAR, we discuss a collection of work that bridges the borders of the two communities by integrating the two techniques into one form of audiovisual display, which we argue to be ''more than the sum of the two.'' We introduce and motivate a classification system applicable to such audiovisual displays and categorize a corpus of 57 academic publications that appeared between 2011 and 2023 into categories such as reading level, dataset type, or evaluation system, to mention a few. The corpus also enables a meta-analysis of the field, including regularly occurring design patterns such as the type of visualization and sonification techniques, or the use of visual and auditory channels, showing an overall diverse field with different designs. An analysis of a co-author network of the field shows individual teams without many interconnections. The body of work covered in this STAR also relates to three adjacent topics: audiovisual monitoring, accessibility, and audiovisual data art. These three topics are discussed individually in addition to the systematically conducted part of this research. The findings of this report may be used by researchers from both fields to understand the potentials and challenges of such integrated designs while hopefully inspiring them to collaborate with experts from the respective other field.
  • State of the Art of Graph Visualization in non-Euclidean Spaces
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Miller, Jacob; Bhatia, Dhruv; Kobourov, Stephen; Garth, Christoph; Kerren, Andreas; Raidou, Renata
    Visualizing graphs and networks in non-Euclidean space can have benefits such as natural focus+context in hyperbolic space and the familiarity of interactions in spherical space. Despite work on these topics going back to the mid-1990s, there is no survey, or even part of one, dedicated to this area of research. In this paper we review and categorize over 60 relevant papers and analyze them by geometry (e.g., spherical, hyperbolic, torus), by contribution (e.g., technique, evaluation, proof, application), and by graph class (e.g., tree, planar, complex).
  • The State of the Art in Visual Analytics for 3D Urban Data
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Miranda, Fabio; Ortner, Thomas; Moreira, Gustavo; Hosseini, Maryam; Vuckovic, Milena; Biljecki, Filip; Silva, Claudio T.; Lage, Marcos; Ferreira, Nivan; Garth, Christoph; Kerren, Andreas; Raidou, Renata
    Urbanization has amplified the importance of three-dimensional structures in urban environments for a wide range of phenomena that are of significant interest to diverse stakeholders. With the growing availability of 3D urban data, numerous studies have focused on developing visual analysis techniques tailored to the unique characteristics of urban environments. However, incorporating the third dimension into visual analytics introduces additional challenges in designing effective visual tools to tackle urban data's diverse complexities. In this paper, we present a survey on visual analytics of 3D urban data. Our work characterizes published works along three main dimensions (why, what, and how), considering use cases, analysis tasks, data, visualizations, and interactions. We provide a fine-grained categorization of published works from visualization journals and conferences, as well as from a myriad of urban domains, including urban planning, architecture, and engineering. By incorporating perspectives from both urban and visualization experts, we identify literature gaps, motivate visualization researchers to understand challenges and opportunities, and indicate future research directions.
  • Interactive Visualization on Large High‐Resolution Displays: A Survey
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Belkacem, Ilyasse; Tominski, Christian; Médoc, Nicolas; Knudsen, Søren; Dachselt, Raimund; Ghoniem, Mohammad; Alliez, Pierre; Wimmer, Michael
    In the past few years, large high‐resolution displays (LHRDs) have attracted considerable attention from researchers, industries and application areas that increasingly rely on data‐driven decision‐making. An up‐to‐date survey on the use of LHRDs for interactive data visualization seems warranted to summarize how new solutions meet the characteristics and requirements of LHRDs and take advantage of their unique benefits. In this survey, we start by defining LHRDs and outlining the consequences of LHRD environments for interactive visualizations in terms of more pixels, space, users and devices. Then, we review related literature along the four axes of visualization, interaction, evaluation studies and applications. With these four axes, our survey provides a unique perspective and covers a broad range of aspects relevant to developing interactive visual data analysis solutions for LHRDs. We conclude this survey by reflecting on a number of opportunities for future research to help the community take up the still‐open challenges of interactive visualization on LHRDs.