42-Issue 6
Browsing 42-Issue 6 by Title
Now showing 1 - 20 of 43
Item 3D Generative Model Latent Disentanglement via Local Eigenprojection (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Foti, Simone; Koo, Bongjin; Stoyanov, Danail; Clarkson, Matthew J.; Hauser, Helwig and Alliez, Pierre
Designing realistic digital humans is extremely complex. Most data‐driven generative models used to simplify the creation of their underlying geometric shape do not offer control over the generation of local shape attributes. In this paper, we overcome this limitation by introducing a novel loss function grounded in spectral geometry and applicable to different neural‐network‐based generative models of 3D head and body meshes. By encouraging the latent variables of mesh variational autoencoders (VAEs) or generative adversarial networks (GANs) to follow the local eigenprojections of identity attributes, we improve latent disentanglement and properly decouple attribute creation. Experimental results show that our local eigenprojection disentangled (LED) models not only offer improved disentanglement with respect to the state of the art, but also maintain good generation capabilities, with training times comparable to the vanilla implementations of the models. Our code and pre‐trained models are available at .
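The paper defines its loss exactly; purely to illustrate the idea of tying each latent dimension to the projection of a local mesh region onto that region's Laplacian eigenvectors, here is a minimal PyTorch-style sketch. All names (led_loss, eigvecs, attr_masks) and the simple L2 coupling are hypothetical assumptions, not the paper's code.

```python
import torch

def led_loss(z, verts, eigvecs, attr_masks):
    """Hypothetical sketch: tie latent dimension k to the eigenprojection
    of the mesh region associated with attribute k.

    z          -- (B, K) latent codes, one dimension per attribute
    verts      -- (B, N, 3) generated mesh vertices
    eigvecs    -- (N, K) one local Laplacian eigenvector per attribute
    attr_masks -- (K, N) binary masks selecting each attribute's region
    """
    loss = z.new_zeros(())
    for k in range(z.shape[1]):
        # restrict vertices to region k, then project onto eigenvector k
        masked = attr_masks[k][None, :, None] * verts            # (B, N, 3)
        proj = (masked * eigvecs[None, :, k:k + 1]).sum(dim=(1, 2))  # (B,)
        # penalize deviation of the latent from the eigenprojection
        loss = loss + ((z[:, k] - proj) ** 2).mean()
    return loss
```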
Item Accompany Children's Learning for You: An Intelligent Companion Learning System (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Qian, Jiankai; Jiang, Xinbo; Ma, Jiayao; Li, Jiachen; Gao, Zhenzhen; Qin, Xueying; Hauser, Helwig and Alliez, Pierre
Nowadays, parents attach importance to their children's primary education but often lack the time and sound pedagogical principles to accompany their children's learning. Moreover, existing learning systems cannot perceive children's emotional changes, and their reliance on smart devices such as mobile phones and tablets may cause problems with children's self‐control and cognition. To tackle these issues, we propose an intelligent companion learning system, the IARE, to accompany children in learning English words. The IARE realizes the perception of, and feedback on, children's engagement through an intelligent agent (IA) module, and presents humanized interaction based on projective Augmented Reality (AR). Specifically, the IA perceives changes in children's learning engagement and their spelling status in real time through our online lightweight temporal multiple instance attention module and character recognition module, based on which it analyses the performance of the individual learning process and gives appropriate feedback and guidance. We allow children to interact with physical letters, thus avoiding the excessive interference of electronic devices. To test the efficacy of our system, we conduct a pilot study with 14 children learning English. The results show that our system can significantly improve children's intrinsic motivation and self‐efficacy.

Item Adversarial Interactive Cartoon Sketch Colourization with Texture Constraint and Auxiliary Auto‐Encoder (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Liu, Xiaoyu; Zhu, Shaoqiang; Zeng, Yao; Zhang, Junsong; Hauser, Helwig and Alliez, Pierre
Colouring cartoon sketches can help children develop their intellect and inspire their artistic creativity. Unlike photo colourization or anime line art colourization, cartoon sketch colourization is challenging due to the scarcity of texture information and the irregularity of the line structure, which mainly manifests as colour‐bleeding artifacts in generated images. We propose a colourization approach for cartoon sketches that takes both sketches and colour hints as inputs to produce impressive images. To solve the problem of colour‐bleeding artifacts, we propose a multi‐discriminator colourization framework that introduces a texture discriminator into the conditional generative adversarial network (cGAN). We then combine this framework with a pre‐trained auxiliary auto‐encoder, where an auxiliary feature loss is designed to further improve colour quality, and a condition input is introduced to increase the generalization ability over hand‐drawn sketches. We present both quantitative and qualitative evaluations, which prove the effectiveness of our proposed method. We test our method on sketches of varying complexity and structure, then build an interactive programme based on our model for a user study. Experimental results demonstrate that the method generates natural and consistent colour images in real time from sketches drawn by non‐professionals.

Item ARAP Revisited: Discretizing the Elastic Energy using Intrinsic Voronoi Cells (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Finnendahl, Ugo; Schwartz, Matthias; Alexa, Marc; Hauser, Helwig and Alliez, Pierre
As‐rigid‐as‐possible (ARAP) surface modelling is widely used for interactive deformation of triangle meshes. We show that ARAP can be interpreted as minimizing a discretization of an elastic energy based on non‐conforming elements defined over dual orthogonal cells of the mesh. Using the intrinsic Voronoi cells rather than an orthogonal dual of the extrinsic mesh guarantees that the energy is non‐negative over each cell. We represent the intrinsic Delaunay edges extrinsically as polylines over the mesh, encoded in barycentric coordinates relative to the mesh vertices. This modification of the original ARAP energy, which we term iARAP, remedies problems stemming from non‐Delaunay edges in the original approach. Unlike the spokes‐and‐rims version of the ARAP approach, it is less susceptible to the triangulation of the surface. We provide examples of deformations generated with iARAP and contrast them with other versions of ARAP. We also discuss the properties of the Laplace‐Beltrami operator implicitly introduced with the new discretization.
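The iARAP discretization is specific to the paper; for context, a minimal numpy sketch of the standard ARAP local step that such schemes build on (fitting the best rotation per cell via SVD before the global Laplacian solve) might look as follows. The helper name and weighting are illustrative assumptions.

```python
import numpy as np

def fit_cell_rotation(rest_edges, def_edges, weights):
    """Local ARAP step for one cell: find the rotation R minimizing
    sum_i w_i * ||R @ rest_edges[i] - def_edges[i]||^2 (via SVD).

    rest_edges, def_edges -- (E, 3) edge vectors before/after deformation
    weights               -- (E,) per-edge weights (e.g. cotangent weights)
    """
    # weighted covariance of rest edges against deformed edges
    S = (weights[:, None, None] *
         (rest_edges[:, :, None] @ def_edges[:, None, :])).sum(axis=0)
    U, _, Vt = np.linalg.svd(S)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # flip a singular vector to avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R
```

The global step then solves a sparse linear system for the vertex positions with these rotations held fixed; iARAP's change is to assemble the energy over intrinsic Voronoi cells of the intrinsic Delaunay triangulation instead of cells of the extrinsic mesh.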
Item Are We There Yet? A Roadmap of Network Visualization from Surveys to Task Taxonomies (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Filipov, Velitchko; Arleo, Alessio; Miksch, Silvia; Hauser, Helwig and Alliez, Pierre
Networks are abstract and ubiquitous data structures, defined as a set of data points and the relationships between them. Network visualization provides meaningful representations of these data, supporting researchers in understanding connections, gathering insights, and detecting and identifying unexpected patterns. Research in this field is focusing on increasingly challenging problems, such as visualizing dynamic, complex, multivariate, and geospatial networked data. This ever‐growing and widely varied body of research has led to several surveys being published, each covering one or more disciplines of network visualization. Despite this effort, the variety and complexity of the research remain an obstacle when surveying the domain and building a comprehensive overview of the literature. Furthermore, the terminology used across these surveys lacks clarity and uniformity, which demands further effort when mapping and categorizing the plethora of different visualization techniques and approaches. In this paper, we aim to provide researchers and practitioners alike with a “roadmap” detailing the current research trends in the field of network visualization. We design our contribution as a meta‐survey in which we discuss, summarize, and categorize recent surveys and task taxonomies published in the context of network visualization. We identify more and less saturated disciplines of research and consolidate the terminology used in the surveyed literature. We also survey the available task taxonomies, providing a comprehensive analysis of their varying support for each network visualization discipline and establishing and discussing a classification of the individual tasks. With this combined analysis of surveys and task taxonomies, we provide an overarching structure of the field, from which we extrapolate the current state of research and promising directions for future work.

Item Break and Splice: A Statistical Method for Non‐Rigid Point Cloud Registration (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Gao, Qinghong; Zhao, Yan; Xi, Long; Tang, Wen; Wan, Tao Ruan; Hauser, Helwig and Alliez, Pierre
3D object matching and registration on point clouds are widely used in computer vision. However, most existing point cloud registration methods have limitations in handling non‐rigid point sets or topology changes (e.g. connections and separations). As a result, critical characteristics such as large inter‐frame motions of the point clouds may not be accurately captured. This paper proposes a statistical algorithm for non‐rigid point set registration that addresses the challenge of handling topology changes without the need to estimate correspondence. The algorithm uses a novel framework to treat the non‐rigid registration challenges as a reproduction process and a Dirichlet Process Gaussian Mixture Model (DPGMM) to cluster a pair of point sets. Labels are assigned to the source point set with an iterative classification procedure, and the source is registered to the target with the same labels using the Bayesian Coherent Point Drift (BCPD) method. The approach is evaluated on several datasets using various qualitative and quantitative metrics. The results demonstrate that it efficiently registers point sets undergoing topology changes and large inter‐frame motions, and that it outperforms state‐of‐the‐art methods, achieving an average error reduction of about 60% and a registration time reduction of about 57.8%.
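The full break-and-splice pipeline (iterative labelling plus BCPD alignment) is the paper's; as a sketch of the DPGMM clustering stage alone, scikit-learn's Dirichlet-process variant of the Gaussian mixture clusters a point set without fixing the number of components in advance. The truncation level below is an arbitrary assumption, not a value from the paper.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def dpgmm_cluster(points, max_components=20, seed=0):
    """Cluster a point cloud with a Dirichlet Process Gaussian Mixture.
    max_components is only a truncation level; the model effectively
    switches off components it does not need."""
    dpgmm = BayesianGaussianMixture(
        n_components=max_components,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        random_state=seed,
    )
    return dpgmm.fit_predict(points)   # (N,) cluster label per point

# toy usage: labels for the source set, which a per-cluster registration
# step (e.g. BCPD) could then consume
source = np.random.rand(500, 3)
labels = dpgmm_cluster(source)
```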
Item A Characterization of Interactive Visual Data Stories With a Spatio‐Temporal Context (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Mayer, Benedikt; Steinhauer, Nastasja; Preim, Bernhard; Meuschke, Monique; Hauser, Helwig and Alliez, Pierre
Large‐scale issues with a spatial and temporal context, such as the COVID‐19 pandemic, the war in Ukraine, and climate change, have brought visual storytelling with data a lot of attention in online journalism, confirming its high effectiveness and relevance for conveying stories. Thus, new ways have emerged that expand the space of visual storytelling techniques. However, interactive visual data stories with a spatio‐temporal context have not been extensively studied yet. In particular, quantitative information about the layout and media used, the visual storytelling techniques, and the visual encoding of space‐time is relevant for a deeper understanding of how such stories are commonly built to convey complex information in a comprehensible way. Covering these three aspects, we propose a design space derived by merging and adjusting existing approaches, which we used to categorize 130 web‐based visual data stories with a spatio‐temporal context collected from 2018 to 2022. An analysis of the collected data reveals the power of large‐scale issues to shape the landscape of storytelling techniques and a trend towards simplified consumability of stories. Taken together, our findings can serve story authors as inspiration regarding which storytelling techniques to include in their own spatio‐temporal data stories.

Item Corrigendum to “Making Procedural Water Waves Boundary‐aware”, “Primal/Dual Descent Methods for Dynamics”, and “Detailed Rigid Body Simulation with Extended Position Based Dynamics” (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Hauser, Helwig and Alliez, Pierre

Item Deep Learning for Scene Flow Estimation on Point Clouds: A Survey and Prospective Trends (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Li, Zhiqi; Xiang, Nan; Chen, Honghua; Zhang, Jianjun; Yang, Xiaosong; Hauser, Helwig and Alliez, Pierre
Aiming at obtaining structural information and 3D motion of dynamic scenes, scene flow estimation has long been a research interest in computer vision and computer graphics. It is also a fundamental task for various applications such as autonomous driving. Compared to previous methods that utilize image representations, many recent works build on the power of deep learning and focus on point cloud representations to conduct 3D flow estimation. This paper comprehensively reviews the pioneering literature in scene flow estimation based on point clouds. It examines learning paradigms in detail and presents insightful comparisons between the state‐of‐the‐art methods using deep learning for scene flow estimation. Furthermore, this paper investigates various higher‐level scene understanding tasks, including object tracking and motion segmentation, and concludes with an overview of foreseeable research trends for scene flow estimation.

Item Distributed Poisson Surface Reconstruction (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Kazhdan, M.; Hoppe, H.; Hauser, Helwig and Alliez, Pierre
Screened Poisson surface reconstruction robustly creates meshes from oriented point sets. For large datasets, the technique requires hours of computation and significant memory. We present a method to parallelize and distribute this computation over multiple commodity client nodes. The method partitions space along one axis into adaptively sized slabs containing balanced subsets of points. Because the Poisson formulation involves a global system, the challenge is to maintain seamless consistency at the slab boundaries and obtain a reconstruction that is indistinguishable from the serial result. To this end, we express the reconstructed indicator function as a sum of a low‐resolution term computed on a server and high‐resolution terms computed on distributed clients. Using a client–server architecture, we map the computation onto a sequence of serial server tasks and parallel client tasks, separated by synchronization barriers. This architecture also enables low‐memory evaluation on a single computer, albeit without speedup. We demonstrate a 700 million vertex reconstruction of the billion‐point David statue scan in less than 20 min on a 65‐node cluster with a maximum memory usage of 45 GB/node, or in 14 h on a single node.
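The distributed solver is the paper's contribution; a minimal numpy sketch of its first step, partitioning space along one axis into adaptively sized slabs holding balanced subsets of points, could read as follows (function name hypothetical).

```python
import numpy as np

def balanced_slabs(points, n_slabs, axis=0):
    """Partition points into n_slabs contiguous slabs along one axis so
    that each slab holds roughly the same number of points; slab widths
    adapt to the local point density."""
    coords = points[:, axis]
    # quantiles of the coordinate distribution give the slab boundaries
    bounds = np.quantile(coords, np.linspace(0.0, 1.0, n_slabs + 1))
    slab = np.clip(np.searchsorted(bounds, coords, side="right") - 1,
                   0, n_slabs - 1)
    return bounds, slab

pts = np.random.rand(100_000, 3)
bounds, slab_of = balanced_slabs(pts, n_slabs=8)
```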
Item Efficient Hardware Acceleration of Robust Volumetric Light Transport Simulation (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Moonen, Nol; Jalba, Andrei C.; Hauser, Helwig and Alliez, Pierre
Efficiently simulating the full range of light effects in arbitrary input scenes that contain participating media is a difficult task. Unified points, beams and paths (UPBP) is an algorithm capable of capturing a wide range of media effects by combining bidirectional path tracing (BPT) and photon density estimation (PDE) with multiple importance sampling (MIS). A computationally expensive task in UPBP is the MIS weight computation, performed each time a light path is formed. We derive an efficient algorithm to compute the MIS weights for UPBP, which improves over previous work by eliminating the need to iterate over the path vertices. We achieve this by maintaining recursive quantities as subpaths are generated, from which the subpath weights can be computed; the full path weight can then be computed using only the data cached at the two vertices at the ends of the subpaths. Furthermore, a costly part of PDE is the search for nearby photon points and beams. Previous work has shown that a spatial data structure for photon mapping can be implemented using the hardware‐accelerated bounding volume hierarchy of NVIDIA's RTX GPUs. We show that the same technique can be applied to different types of volumetric PDE and compare the performance of these data structures with the state of the art. Finally, using our new algorithm and data structures, we implement UPBP fully on the GPU, which, to the best of our knowledge, has not been done before.
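The recursive evaluation scheme is the paper's; the quantity it computes is the standard multiple importance sampling weight. For a path sampled by technique i, among techniques with path densities p_j and sample counts n_j, the balance heuristic gives (standard formula, not the paper's notation):

```latex
w_i(\bar{x}) = \frac{n_i \, p_i(\bar{x})}{\sum_j n_j \, p_j(\bar{x})}
```

Evaluating the denominator naively requires the density of every technique for the same path, which is why implementations normally iterate over the path vertices; the paper instead maintains recursive quantities while the subpaths are generated, so the weight needs only the data cached at the two subpath end vertices.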
Item Episodes and Topics in Multivariate Temporal Data (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Andrienko, Natalia; Andrienko, Gennady; Shirato, Gota; Hauser, Helwig and Alliez, Pierre
The term ‘episode’ refers to a time interval in the development of a dynamic process or the behaviour of an entity. Episode‐based data consist of a set of episodes that are described using time series of multiple attribute values. Our research problem involves analysing episode‐based data in order to understand the distribution of multi‐attribute dynamic characteristics across a set of episodes. To solve this problem, we applied an existing theoretical model and developed a general approach that involves incrementally increasing data abstraction. We instantiated this general approach in an analysis procedure in which the value variation of each attribute within an episode is represented by a combination of symbols treated as a ‘word’. The variation of multiple attributes is thus represented by a combination of ‘words’ treated as a ‘text’. In this way, the set of episodes is transformed into a collection of text documents. Topic modelling techniques applied to this collection find groups of related (i.e. repeatedly co‐occurring) ‘words’, which are called ‘topics’. Given that the ‘words’ encode variation patterns of individual attributes, the ‘topics’ represent patterns of joint variation of multiple attributes. In the following steps, analysts interpret the topics and examine their distribution across all episodes using interactive visualizations. We test the effectiveness of the procedure by applying it to two types of episode‐based data with distinct properties and introduce a range of generic and data type‐specific visualization techniques that can support the interpretation and exploration of topic distributions.
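The symbolization scheme is the paper's own; as a generic sketch of the episodes-as-documents idea, the encoded ‘texts’ can be handed to an off-the-shelf topic model. The toy ‘words’ below, each pairing an attribute with a coarse variation symbol, are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# each episode becomes a 'text'; each 'word' combines an attribute name
# with a coarse symbol for how that attribute varied within the episode
episodes = [
    "speed_rise speed_fall angle_flat",
    "speed_rise angle_rise angle_flat",
    "speed_fall angle_fall pressure_rise",
]

X = CountVectorizer().fit_transform(episodes)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)   # per-episode mixture over 'topics', i.e.
                                # patterns of joint multi-attribute variation
```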
Item EvIcon: Designing High‐Usability Icon with Human‐in‐the‐loop Exploration and IconCLIP (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Shen, I‐Chao; Cherng, Fu‐Yin; Igarashi, Takeo; Lin, Wen‐Chieh; Chen, Bing‐Yu; Hauser, Helwig and Alliez, Pierre
Interface icons are prevalent in various digital applications. Due to limited time and budgets, many designers rely on informal evaluation, which often results in icons with poor usability. In this paper, we propose a unique human‐in‐the‐loop framework that allows our target users, that is, novice and professional user interface (UI) designers, to improve the usability of interface icons efficiently. We formulate several usability criteria into a perceptual usability function and enable users to iteratively revise an icon set with an interactive design tool, EvIcon. We take a large‐scale pre‐trained joint image‐text embedding (CLIP) and fine‐tune it to embed icon visuals with icon tags in the same embedding space (IconCLIP). During the revision process, our design tool provides two types of instant perceptual usability feedback. First, we provide perceptual usability feedback modelled by deep learning models trained on IconCLIP embeddings and crowdsourced perceptual ratings. Second, we use the embedding space of IconCLIP to assist users in improving the icons' visual distinguishability within the user‐prepared icon set. To provide the perceptual prediction, we compiled the first large‐scale dataset of perceptual usability ratings over 10,000 interface icons by conducting a crowdsourcing study. We demonstrate that our framework can benefit the interface icon revision process of UI designers with a wide range of professional experience. Moreover, the interface icons designed using our framework achieved better semantic distance and familiarity, as verified by an additional online user study.

Item Evonne: A Visual Tool for Explaining Reasoning with OWL Ontologies and Supporting Interactive Debugging (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Méndez, J.; Alrabbaa, C.; Koopmann, P.; Langner, R.; Baader, F.; Dachselt, R.; Hauser, Helwig and Alliez, Pierre
OWL is a powerful language for formalizing terminologies in an ontology. Its main strength lies in its foundation on description logics, allowing systems to automatically deduce implicit information through logical reasoning. However, since ontologies are often complex, understanding the outcome of the reasoning process is not always straightforward. Unlike existing tools for exploring ontologies, our visualization tool Evonne is tailored towards explaining logical consequences. In addition, it supports the debugging of unwanted consequences and allows for an interactive comparison of the impact of removing statements from the ontology. Our visual approach combines (1) specialized views for the explanation of logical consequences and the structure of the ontology, (2) multiple layout modes for iteratively exploring explanations, (3) detailed explanations of specific reasoning steps, (4) cross‐view highlighting and colour coding of the visualization components, (5) features for dealing with visual complexity and (6) comparison and exploration of possible fixes to the ontology. We evaluated Evonne in a qualitative study with 16 experts in logics, and their positive feedback confirms the value of our concepts for explaining reasoning and debugging ontologies.

Item Exploration of Player Behaviours from Broadcast Badminton Videos (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Chen, Wei‐Ting; Wu, Hsiang‐Yun; Shih, Yun‐An; Wang, Chih‐Chuan; Wang, Yu‐Shuen; Hauser, Helwig and Alliez, Pierre
Understanding an opposing player's behaviours and weaknesses is often the key to winning a badminton game. This study presents a system to extract game data from broadcast badminton videos and visualize the extracted data to help coaches and players develop effective tactics. Specifically, we apply state‐of‐the‐art machine learning methods to partition a broadcast video into segments, each showing a badminton rally. Next, we detect players' feet in each video frame and transform the player positions into the court coordinate system. Finally, we detect the hit frames in each rally, in which the shuttle reverses direction. By visualizing the extracted data, our system conveys when and where players hit the shuttle in historical games. Since players tend to smash or drop shuttles from specific locations, we provide users with interactive tools to filter the data and focus on the distributions conditioned on player positions; this strategy also reduces visual clutter. In addition, our system plots the shuttle hitting distributions side by side, enabling visual comparison and analysis of player behaviours under different conditions. The results and the use cases demonstrate the feasibility of our system.
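The abstract does not spell out the mapping; a common way to realize "transform the player positions into the court coordinate system" is a homography from the image plane to the court plane, estimated from known court landmarks. A minimal sketch, assuming the 3x3 matrix H has already been estimated:

```python
import numpy as np

def to_court_coords(points_px, H):
    """Map image-plane points (N, 2) to court coordinates using a
    homography H (3, 3), e.g. estimated from the court line corners."""
    pts = np.hstack([points_px, np.ones((len(points_px), 1))])  # homogeneous
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]   # perspective divide
```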
Item Faster Edge‐Path Bundling through Graph Spanners (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Wallinger, Markus; Archambault, Daniel; Auber, David; Nöllenburg, Martin; Peltonen, Jaakko; Hauser, Helwig and Alliez, Pierre
Edge‐Path bundling is a recent edge bundling approach that does not incur the ambiguities caused by bundling disconnected edges together. Although the approach produces less ambiguous bundlings, it suffers from high computational cost. In this paper, we present a new Edge‐Path bundling approach that increases the computational speed of the algorithm without reducing the quality of the bundling. First, we demonstrate that biconnected components can be processed separately in an Edge‐Path bundling of a graph without changing the result. Then, we present a new edge bundling algorithm based on observing and exploiting a strong relationship between Edge‐Path bundling and graph spanners. Although the worst‐case complexity of the approach is the same as that of the original Edge‐Path bundling algorithm, our experiments demonstrate that the new approach is – times faster than Edge‐Path bundling depending on the dataset, bringing its practical running time more in line with traditional edge bundling algorithms.
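The speed-up comes from graph spanners; the canonical greedy t-spanner construction (keep an edge only if the spanner cannot already connect its endpoints within t times the edge's weight) can be sketched with networkx. Whether the paper uses exactly this construction is not stated in the abstract.

```python
import networkx as nx

def greedy_t_spanner(G, t=2.0):
    """Greedy t-spanner of a weighted graph G: every distance in the
    spanner is at most t times the corresponding distance in G."""
    S = nx.Graph()
    S.add_nodes_from(G.nodes)
    # consider edges in order of increasing weight
    for u, v, w in sorted(G.edges(data="weight"), key=lambda e: e[2]):
        try:
            d = nx.shortest_path_length(S, u, v, weight="weight")
        except nx.NetworkXNoPath:
            d = float("inf")
        if d > t * w:           # spanner detour too long: keep this edge
            S.add_edge(u, v, weight=w)
    return S
```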
Item Feature Representation for High‐resolution Clothed Human Reconstruction (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Pu, Juncheng; Liu, Li; Fu, Xiaodong; Su, Zhuo; Liu, Lijun; Peng, Wei; Hauser, Helwig and Alliez, Pierre
Detailed and accurate feature representation is essential for high‐resolution reconstruction of clothed humans. Here we introduce a unified feature representation for clothed human reconstruction that can adapt to changeable posture and various clothing details. The method can be divided into two parts: the human shape feature representation and the details feature representation. Specifically, we first combine the voxel feature learned from a semantic voxel with the pixel feature from the input image as an implicit representation of human shape. Then, the details feature, mixing the clothed‐layer feature and the normal feature, is used to guide a multi‐layer perceptron to capture geometric surface details. The key difference from existing methods is that we use clothing semantics to infer clothed‐layer information and further restore the layer details with geometric height. Qualitative and quantitative experimental results demonstrate that the proposed method outperforms existing methods in handling limb swing and clothing details. Our method provides a new solution for clothed human reconstruction with high‐resolution details (style, wrinkles and clothed layers) and has good potential in three‐dimensional virtual try‐on and digital characters.

Item Garment Model Extraction from Clothed Mannequin Scan (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Gao, Qiqi; Taketomi, Takafumi; Hauser, Helwig and Alliez, Pierre
Modelling garments with rich details requires enormous time and expertise from artists. Recent works reconstruct garments by segmenting clothed human scans. However, existing methods rely on particular human body templates and do not perform as well on loose garments such as skirts. This paper presents a two‐stage pipeline for extracting high‐fidelity garments from static scan data of clothed mannequins. Our key contribution is a novel method for tracking both tight and loose boundaries between garments and mannequin skin. Our algorithm enables the modelling of off‐the‐shelf clothing with fine details. It is independent of human template models and requires only minimal mannequin priors. The effectiveness of our method is validated through quantitative and qualitative comparison with the baseline method. The results demonstrate that our method can accurately extract both tight and loose garments within a reasonable time.

Item Harmonized Portrait‐Background Image Composition (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Wang, Yijiang; Li, Yuqi; Wang, Chong; Ye, Xulun; Hauser, Helwig and Alliez, Pierre
Portrait‐background image composition is a widely used operation in selfie editing, video meetings, and other portrait applications. To guarantee the realism of the composited images, the appearance of the foreground portraits needs to be adjusted to fit the new background images. Existing image harmonization approaches are designed to handle general foreground objects and thus lack the specific ability to adjust portrait foregrounds. In this paper, we present a novel end‐to‐end network architecture that learns both the content features and the style features for portrait‐background composition. The method adjusts the appearance of portraits to make them compatible with backgrounds, while the generation of the composited images satisfies the prior of a style‐based generator. We also propose a pipeline to generate high‐quality, high‐variety synthesized image datasets for training and evaluation. The proposed method outperforms other state‐of‐the‐art methods both on the synthesized dataset and on real composited images, and shows robust performance in video applications.

Item iFUNDit: Visual Profiling of Fund Investment Styles (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Zhang, R.; Ku, B. K.; Wang, Y.; Yue, X.; Liu, S.; Li, K.; Qu, H.; Hauser, Helwig and Alliez, Pierre
Mutual funds are becoming increasingly popular with the emergence of Internet finance. Clear profiling of a fund's investment style is crucial for fund managers to evaluate their investment strategies and for investors to understand their investments. However, profiling a fund's investment style is challenging, as it requires a comprehensive analysis of complex multi‐dimensional temporal data. In addition, different fund managers and investors have different focuses when analysing a fund's investment style. To address this issue, we propose iFUNDit, an interactive visual analytics system for fund investment style analysis. The system decomposes a fund's critical features into performance attributes and investment style factors and visualizes them in a set of coupled views: a fund and manager view, to delineate the distribution of funds' and managers' critical attributes on the market; a cluster view, to show the similarity of investment styles between different funds; and a detail view, to analyse the evolution of fund investment style. The system provides a holistic overview of fund data and facilitates a streamlined analysis of investment style at both the fund and the manager level.
The effectiveness and usability of the system are demonstrated through domain expert interviews and case studies using a real mutual fund dataset.