42-Issue 6
Browsing 42-Issue 6 by Issue Date
Now showing 1 - 20 of 43
Item Feature Representation for High‐resolution Clothed Human Reconstruction (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Pu, Juncheng; Liu, Li; Fu, Xiaodong; Su, Zhuo; Liu, Lijun; Peng, Wei; Hauser, Helwig and Alliez, Pierre. Detailed and accurate feature representation is essential for high‐resolution reconstruction of clothed humans. Herein we introduce a unified feature representation for clothed human reconstruction, which can adapt to changeable postures and various clothing details. The whole method can be divided into two parts: the human shape feature representation and the detail feature representation. Specifically, we first combine the voxel feature learned from semantic voxels with the pixel feature from the input image as an implicit representation of human shape. Then, the detail feature, mixing the clothed‐layer feature with the normal feature, is used to guide the multi‐layer perceptron to capture geometric surface details. The key difference from existing methods is that we use clothing semantics to infer clothed‐layer information, and further restore the layer details with geometric height. Qualitative and quantitative experimental results demonstrate that the proposed method outperforms existing methods in handling limb swing and clothing details.
Our method provides a new solution for clothed human reconstruction with high‐resolution details (style, wrinkles and clothed layers), and has good potential in three‐dimensional virtual try‐on and digital characters.

Item Deep Learning for Scene Flow Estimation on Point Clouds: A Survey and Prospective Trends (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Li, Zhiqi; Xiang, Nan; Chen, Honghua; Zhang, Jianjun; Yang, Xiaosong; Hauser, Helwig and Alliez, Pierre. Scene flow estimation, which aims at obtaining the structural information and 3D motion of dynamic scenes, has long been a research interest in computer vision and computer graphics. It is also a fundamental task for applications such as autonomous driving. Compared to previous methods that utilize image representations, much recent research builds on the power of deep learning and focuses on point cloud representations to conduct 3D flow estimation. This paper comprehensively reviews the pioneering literature on scene flow estimation based on point clouds. Meanwhile, it delves into the details of learning paradigms and presents insightful comparisons between state‐of‐the‐art methods that use deep learning for scene flow estimation. Furthermore, this paper investigates various higher‐level scene understanding tasks, including object tracking and motion segmentation, and concludes with an overview of foreseeable research trends for scene flow estimation.

Item ROI Scissor: Interactive Segmentation of Feature Region of Interest in a Triangular Mesh (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Moon, Ji‐Hye; Ha, Yujin; Park, Sanghun; Kim, Myung‐Soo; Yoon, Seung‐Hyun; Hauser, Helwig and Alliez, Pierre. We present a simple and effective method for the interactive segmentation of feature regions in a triangular mesh.
From the user‐specified radius and click position, the candidate region containing the desired feature region is defined as a geodesic disc on the triangle mesh. A concavity‐aware harmonic field is then computed on the candidate region using appropriate boundary constraints. An initial isoline is chosen by evaluating uniformly sampled isolines of the harmonic field based on gradient magnitude. A set of feature points on the initial isoline is selected, and the anisotropic geodesics passing through them are determined as the final segmentation boundary, which is smooth and locally shortest. Experimental results on various 3D models reveal the effectiveness of the proposed method.

Item Texture Inpainting for Photogrammetric Models (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Maggiordomo, A.; Cignoni, P.; Tarini, M.; Hauser, Helwig and Alliez, Pierre. We devise a technique designed to remove the texturing artefacts typical of 3D models of real‐world objects acquired by photogrammetric techniques. Our technique leverages recent advancements in the inpainting of natural colour images, adapting them to this specific context. A neural network, modified and trained for our purposes, replaces the texture areas containing the defects with new plausible patches of texels reconstructed from the surrounding surface texture. We train and apply the network model on locally reparametrized texture patches, so as to provide an input that simplifies the learning process because it avoids texture seams, unused texture areas, background, depth jumps and so on. We automatically extract appropriate training data from real‐world datasets.
We show two applications of the resulting method: first, as a fully automatic tool addressing all problems that can be detected by analysing the UV‐map of the input model; and second, as an interactive semi‐automatic tool, presented to the user as a 3D ‘fixing’ brush that removes artefacts from any zone the user paints on. We demonstrate our method on a variety of real‐world inputs and provide a reference usable implementation.

Item Model‐based Crowd Behaviours in Human‐solution Space (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Xiang, Wei; Wang, He; Zhang, Yuqing; Yip, Milo K.; Jin, Xiaogang; Hauser, Helwig and Alliez, Pierre. Realistic crowd simulation has been pursued for decades, but it still necessitates tedious human labour and much trial and error. Most current crowd modelling is either empirical (model‐based) or data‐driven (model‐free). Model‐based methods cannot fit observed data precisely, whereas model‐free methods are limited by the availability and quality of data and are uninterpretable. In this paper, we aim to take advantage of both model‐based and data‐driven approaches. To accomplish this, we propose a new simulation framework built on a physics‐based model that is designed to be data‐friendly. Both the general prior knowledge about crowds encoded by the physics‐based model and the specific real‐world crowd data at hand jointly influence the system dynamics. With a multi‐granularity physics‐based model, the framework combines microscopic and macroscopic motion control. Each simulation step is formulated as an energy optimization problem, where the minimizer is the desired crowd behaviour.
In contrast to traditional optimization‐based methods, which seek the theoretical minimizer, we design an acceleration‐aware data‐driven scheme that computes the minimizer from real‐world data, achieving higher realism by parameterizing both velocity and acceleration. Experiments demonstrate that, compared to earlier methods, our method produces crowd animations that behave more realistically across a variety of scales and scenarios.

Item EvIcon: Designing High‐Usability Icon with Human‐in‐the‐loop Exploration and IconCLIP (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Shen, I‐Chao; Cherng, Fu‐Yin; Igarashi, Takeo; Lin, Wen‐Chieh; Chen, Bing‐Yu; Hauser, Helwig and Alliez, Pierre. Interface icons are prevalent in various digital applications. Due to limited time and budgets, many designers rely on informal evaluation, which often results in icons with poor usability. In this paper, we propose a human‐in‐the‐loop framework that allows our target users, that is, novice and professional user interface (UI) designers, to improve the usability of interface icons efficiently. We formulate several usability criteria into a perceptual usability function and enable users to iteratively revise an icon set with an interactive design tool, EvIcon. We take a large‐scale pre‐trained joint image‐text embedding (CLIP) and fine‐tune it to embed icon visuals with icon tags in the same embedding space (IconCLIP). During the revision process, our design tool provides two types of instant perceptual usability feedback. First, we provide perceptual usability feedback modelled by deep learning models trained on IconCLIP embeddings and crowdsourced perceptual ratings. Second, we use the embedding space of IconCLIP to help users improve the visual distinguishability of icons within the user‐prepared icon set.
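A distinguishability check of the kind described above, comparing icon embeddings within the user's set, might be sketched as follows. This is an illustrative simplification: the tiny 4‐dimensional vectors, the `confusable_pairs` helper and the cosine‐similarity threshold are all assumptions standing in for the actual IconCLIP embeddings and criteria.

```python
import math

def cosine_similarity(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def confusable_pairs(embeddings, threshold=0.9):
    # Flag icon pairs whose embeddings lie too close together,
    # i.e. icons likely to be hard for users to tell apart.
    names = list(embeddings)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if cosine_similarity(embeddings[a], embeddings[b]) > threshold:
                pairs.append((a, b))
    return pairs
```

A designer could then revise whichever icons appear in the flagged pairs until no pair exceeds the threshold.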
To provide the perceptual prediction, we compiled the first large‐scale dataset of perceptual usability ratings, covering over 10,000 interface icons, by conducting a crowdsourcing study. We demonstrated that our framework benefits the interface icon revision process of UI designers with a wide range of professional experience. Moreover, the interface icons designed using our framework achieved better semantic distance and familiarity, as verified by an additional online user study.

Item Distributed Poisson Surface Reconstruction (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Kazhdan, M.; Hoppe, H.; Hauser, Helwig and Alliez, Pierre. Screened Poisson surface reconstruction robustly creates meshes from oriented point sets. For large datasets, the technique requires hours of computation and significant memory. We present a method to parallelize and distribute this computation over multiple commodity client nodes. The method partitions space along one axis into adaptively sized slabs containing balanced subsets of points. Because the Poisson formulation involves a global system, the challenge is to maintain seamless consistency at the slab boundaries and obtain a reconstruction that is indistinguishable from the serial result. To this end, we express the reconstructed indicator function as the sum of a low‐resolution term computed on a server and high‐resolution terms computed on distributed clients. Using a client–server architecture, we map the computation onto a sequence of serial server tasks and parallel client tasks, separated by synchronization barriers. This architecture also enables low‐memory evaluation on a single computer, albeit without speedup.
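The balanced slab partition along one axis can be illustrated with a minimal sketch. Splitting at sorted quantiles, and the `balanced_slabs` helper itself, are illustrative assumptions rather than the paper's actual implementation; the point is only that equal point counts per slab automatically yield adaptive slab widths.

```python
def balanced_slabs(points, k, axis=0):
    # Partition points into k slabs along one axis so that each slab
    # receives a near-equal share of the points. Adaptive widths fall
    # out automatically: dense regions get thin slabs, sparse regions
    # get wide ones.
    order = sorted(points, key=lambda p: p[axis])
    n = len(order)
    return [order[i * n // k:(i + 1) * n // k] for i in range(k)]
```

Each client would then reconstruct the high‐resolution terms for its own slab, with the server supplying the shared low‐resolution term.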
We demonstrate a 700 million vertex reconstruction of the billion‐point David statue scan in less than 20 min on a 65‐node cluster with a maximum memory usage of 45 GB/node, or in 14 h on a single node.

Item Visually Abstracting Event Sequences as Double Trees Enriched with Category‐Based Comparison (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Krause, Cedric; Agarwal, Shivam; Burch, Michael; Beck, Fabian; Hauser, Helwig and Alliez, Pierre. Event sequence visualization aids analysts in many domains in better understanding and inferring new insights from event data. Analysing behaviour before or after a certain event of interest is a common task in many scenarios. In this paper, we introduce, formally define, and position the double tree as a domain‐agnostic tree visualization approach for this task. The visualization shows the sequences that led to the event of interest as a tree on the left, and those that followed on the right. Moreover, our approach enables users to create selections based on event attributes to interactively compare the events and sequences along colour‐coded categories. We integrate the double tree and category‐based comparison into a user interface for event sequence analysis. In three application examples, we show a diverse set of scenarios, covering short and long time spans, non‐spatial and spatial events, and human and artificial actors, to demonstrate the general applicability of the approach.

Item 3D Generative Model Latent Disentanglement via Local Eigenprojection (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Foti, Simone; Koo, Bongjin; Stoyanov, Danail; Clarkson, Matthew J.; Hauser, Helwig and Alliez, Pierre. Designing realistic digital humans is extremely complex. Most data‐driven generative models used to simplify the creation of their underlying geometric shape do not offer control over the generation of local shape attributes.
In this paper, we overcome this limitation by introducing a novel loss function grounded in spectral geometry and applicable to different neural‐network‐based generative models of 3D head and body meshes. By encouraging the latent variables of mesh variational autoencoders (VAEs) or generative adversarial networks (GANs) to follow the local eigenprojections of identity attributes, we improve latent disentanglement and properly decouple attribute creation. Experimental results show that our local eigenprojection disentangled (LED) models not only offer improved disentanglement with respect to the state of the art, but also maintain good generation capabilities, with training times comparable to the vanilla implementations of the models. Our code and pre‐trained models are available at .

Item MesoGAN: Generative Neural Reflectance Shells (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Diolatzis, Stavros; Novak, Jan; Rousselle, Fabrice; Granskog, Jonathan; Aittala, Miika; Ramamoorthi, Ravi; Drettakis, George; Hauser, Helwig and Alliez, Pierre. We introduce MesoGAN, a model for generative 3D neural textures. This new graphics primitive represents mesoscale appearance by combining the strengths of generative adversarial networks (StyleGAN) and volumetric neural field rendering. The primitive can be applied to surfaces as a neural reflectance shell: a thin volumetric layer above the surface with appearance parameters defined by a neural network. To construct the neural shell, we first generate a 2D feature texture using StyleGAN with carefully randomized Fourier features to support arbitrarily sized textures without repetition artefacts. We augment the 2D feature texture with a learned height feature, which aids the neural field renderer in producing volumetric parameters from the 2D texture.
To facilitate filtering, and to enable end‐to‐end training within the memory constraints of current hardware, we utilize a hierarchical texturing approach and train our model on multi‐scale synthetic datasets of 3D mesoscale structures. We propose one possible approach for conditioning MesoGAN on artistic parameters (e.g. fibre length, density of strands, lighting direction) and demonstrate and discuss its integration into physically based renderers.

Item Numerical Coarsening with Neural Shape Functions (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Ni, Ning; Xu, Qingyu; Li, Zhehao; Fu, Xiao‐Ming; Liu, Ligang; Hauser, Helwig and Alliez, Pierre. We propose to use nonlinear shape functions represented as neural networks in numerical coarsening to achieve generalization capability as well as good accuracy. To overcome the challenge of generalizing to different simulation scenarios, especially nonlinear materials under large deformations, our key idea is to replace the linear mapping between coarse and fine meshes adopted in previous works with a nonlinear one represented by neural networks. However, directly applying an end‐to‐end neural representation leads to poor performance, owing to an overly large parameter space and a failure to capture some intrinsic geometric properties of shape functions. Our solution is to embed geometric constraints as prior knowledge in learning, which greatly improves training efficiency and inference robustness. With the trained neural shape functions, we can easily adopt numerical coarsening in the simulation of various hyperelastic models without any additional preprocessing steps.
The experimental results demonstrate the efficiency and generalization capability of our method over previous works.

Item A Semi‐Procedural Convolutional Material Prior (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Zhou, Xilong; Hašan, Miloš; Deschaintre, Valentin; Guerrero, Paul; Sunkavalli, Kalyan; Kalantari, Nima Khademi; Hauser, Helwig and Alliez, Pierre. Lightweight material capture methods require a material prior, defining the subspace of plausible textures within the large space of unconstrained texel grids. Previous work has used either deep neural networks (trained on large synthetic material datasets) or procedural node graphs (constructed by expert artists) as such priors. In this paper, we propose a semi‐procedural differentiable material prior that represents materials as a set of (typically procedural) grayscale noises and patterns that are processed by a sequence of lightweight learnable convolutional filter operations. We demonstrate that the restricted structure of this architecture acts as an inductive bias on the space of material appearances, allowing us to optimize the weights of the convolutions per material, with no need for pre‐training on a large dataset. Combined with a differentiable rendering step and a perceptual loss, we enable single‐image tileable material capture comparable with the state of the art. Our approach does not target pixel‐perfect recovery of the material, but rather uses the noises and patterns as input to match the target appearance. To achieve this, it does not require complex procedural graphs, and it has much lower complexity, computational cost and storage cost. We also enable control over the results by changing the provided patterns and by using guide maps to push the material properties towards a user‐driven objective.

Item Are We There Yet?
A Roadmap of Network Visualization from Surveys to Task Taxonomies (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Filipov, Velitchko; Arleo, Alessio; Miksch, Silvia; Hauser, Helwig and Alliez, Pierre. Networks are abstract and ubiquitous data structures, defined as a set of data points and the relationships between them. Network visualization provides meaningful representations of these data, supporting researchers in understanding connections, gathering insights, and detecting and identifying unexpected patterns. Research in this field is focusing on increasingly challenging problems, such as visualizing dynamic, complex, multivariate, and geospatial networked data. This ever‐growing and widely varied body of research has led to several surveys being published, each covering one or more disciplines of network visualization. Despite this effort, the variety and complexity of the research represent an obstacle when surveying the domain and building a comprehensive overview of the literature. Furthermore, there is a lack of clarity and uniformity in the terminology used across the surveys, which requires further effort when mapping and categorizing the plethora of different visualization techniques and approaches. In this paper, we aim to provide researchers and practitioners alike with a “roadmap” detailing the current research trends in the field of network visualization. We design our contribution as a meta‐survey in which we discuss, summarize, and categorize recent surveys and task taxonomies published in the context of network visualization. We identify more and less saturated disciplines of research and consolidate the terminology used in the surveyed literature. We also survey the available task taxonomies, providing a comprehensive analysis of their varying support for each network visualization discipline, and establish and discuss a classification of the individual tasks.
With this combined analysis of surveys and task taxonomies, we provide an overarching structure of the field, from which we extrapolate the current state of research and promising directions for future work.

Item Harmonized Portrait‐Background Image Composition (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Wang, Yijiang; Li, Yuqi; Wang, Chong; Ye, Xulun; Hauser, Helwig and Alliez, Pierre. Portrait‐background image composition is a widely used operation in selfie editing, video meetings, and other portrait applications. To guarantee the realism of the composited images, the appearance of the foreground portraits needs to be adjusted to fit the new background images. Existing image harmonization approaches are designed to handle general foreground objects and thus lack the specific ability to adjust portrait foregrounds. In this paper, we present a novel end‐to‐end network architecture that learns both content features and style features for portrait‐background composition. The method adjusts the appearance of portraits to make them compatible with the backgrounds, while the generation of the composited images satisfies the prior of a style‐based generator. We also propose a pipeline to generate high‐quality and high‐variety synthesized image datasets for training and evaluation. The proposed method outperforms other state‐of‐the‐art methods both on the synthesized dataset and on real composited images, and shows robust performance in video applications.

Item Adversarial Interactive Cartoon Sketch Colourization with Texture Constraint and Auxiliary Auto‐Encoder (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Liu, Xiaoyu; Zhu, Shaoqiang; Zeng, Yao; Zhang, Junsong; Hauser, Helwig and Alliez, Pierre. Colouring cartoon sketches can help children develop their intellect and inspire their artistic creativity.
Unlike photo colourization or anime line art colourization, cartoon sketch colourization is challenging due to the scarcity of texture information and the irregularity of the line structure, which mainly manifests as colour‐bleeding artifacts in generated images. We propose a colourization approach for cartoon sketches that takes both sketches and colour hints as inputs to produce impressive images. To address colour‐bleeding artifacts, we propose a multi‐discriminator colourization framework that introduces a texture discriminator into the conditional generative adversarial network (cGAN). We then combine this framework with a pre‐trained auxiliary auto‐encoder, where an auxiliary feature loss is designed to further improve colour quality and a condition input is introduced to increase generalization over hand‐drawn sketches. We present both quantitative and qualitative evaluations, which demonstrate the effectiveness of our proposed method. We test our method on sketches of varying complexity and structure, and build an interactive programme based on our model for a user study. Experimental results demonstrate that the method generates natural and consistent colour images in real time from sketches drawn by non‐professionals.

Item ARAP Revisited: Discretizing the Elastic Energy using Intrinsic Voronoi Cells (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Finnendahl, Ugo; Schwartz, Matthias; Alexa, Marc; Hauser, Helwig and Alliez, Pierre. As‐rigid‐as‐possible (ARAP) surface modelling is widely used for interactive deformation of triangle meshes. We show that ARAP can be interpreted as minimizing a discretization of an elastic energy based on non‐conforming elements defined over dual orthogonal cells of the mesh. Using the Voronoi cells rather than an orthogonal dual of the extrinsic mesh guarantees that the energy is non‐negative over each cell.
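For context, the local step of a classical ARAP solve fits a best rotation per cell to the deformed edge vectors (in 3D this uses an SVD). The 2D analogue below conveys the idea; the `fit_cell_rotation` helper is an illustrative assumption, not code from the paper.

```python
import math

def fit_cell_rotation(rest_edges, deformed_edges):
    # Local ARAP step in 2D: find the rotation angle that best maps
    # the rest-pose edge vectors of a cell onto the deformed ones,
    # minimizing the summed squared residuals (closed form via atan2).
    s_cos = sum(rx * dx + ry * dy
                for (rx, ry), (dx, dy) in zip(rest_edges, deformed_edges))
    s_sin = sum(rx * dy - ry * dx
                for (rx, ry), (dx, dy) in zip(rest_edges, deformed_edges))
    return math.atan2(s_sin, s_cos)
```

A full solve alternates this per-cell fit with a global linear step that repositions the vertices; the paper's contribution concerns the cells over which the per-cell energies are measured, not this alternation itself.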
We represent the intrinsic Delaunay edges extrinsically as polylines over the mesh, encoded in barycentric coordinates relative to the mesh vertices. This modification of the original ARAP energy, which we term iARAP, remedies problems stemming from non‐Delaunay edges in the original approach. Unlike the spokes‐and‐rims version of the ARAP approach, it is less susceptible to the triangulation of the surface. We provide examples of deformations generated with iARAP and contrast them with other versions of ARAP. We also discuss the properties of the Laplace‐Beltrami operator implicitly introduced with the new discretization.

Item Multi‐agent Path Planning with Heterogenous Interactions in Tight Spaces (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Modi, V.; Chen, Y.; Madan, A.; Sueda, S.; Levin, D. I. W.; Hauser, Helwig and Alliez, Pierre. Starting from the assumption that motion is fundamentally a decision‐making problem, we use the world‐line concept from Special Relativity as the inspiration for a novel multi‐agent path planning method. We have identified a particular set of problems that have so far been overlooked by previous works. We present our solution to the global path planning problem for each agent and ensure smooth local collision avoidance for each pair of agents in the scene. We accomplish this by modelling the collision‐free trajectories of the agents through 2D space and time as rods in 3D. We obtain smooth trajectories by solving a non‐linear optimization problem with a quasi‐Newton interior point solver, initializing the solver with a non‐intersecting configuration from a modified Dijkstra's algorithm. This space–time formulation allows us to simulate previously ignored phenomena such as highly heterogeneous interactions in very constrained environments.
It also provides a solution for scenes with unnaturally symmetric agent alignments without the need to jitter agent positions or velocities.

Item Corrigendum to “Making Procedural Water Waves Boundary‐aware”, “Primal/Dual Descent Methods for Dynamics”, and “Detailed Rigid Body Simulation with Extended Position Based Dynamics” (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Hauser, Helwig and Alliez, Pierre.

Item OaIF: Occlusion‐Aware Implicit Function for Clothed Human Reconstruction (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Tan, Yudi; Guan, Boliang; Zhou, Fan; Su, Zhuo; Hauser, Helwig and Alliez, Pierre. Clothed human reconstruction from a monocular image is challenging due to occlusion, depth ambiguity and variations of body poses. Recently, shape representation based on an implicit function, compared to explicit representations such as meshes and voxels, has proven more capable of handling the complex topology of clothed humans. This is mainly achieved by using pixel‐aligned features, which help the implicit function capture local details. However, such methods use an identical feature map for all sampled points to obtain local features, making their models occlusion‐agnostic in the encoding stage. The decoder, as an implicit function, only maps features and does not take occlusion into account explicitly. Thus, these methods fail to generalize well in poses with severe self‐occlusion. To address this, we present OaIF, which encodes local features conditioned on the visibility of SMPL vertices. OaIF projects SMPL vertices onto the image plane to obtain image features masked by visibility. Vertex features integrated with the geometric information of the mesh are then fed into a GAT network for joint encoding. We query hybrid features and occlusion factors for points through cross attention and learn occupancy fields for clothed humans.
The experiments demonstrate that OaIF achieves more robust and accurate reconstruction than the state of the art on both public datasets and wild images.

Item Triangle Influence Supersets for Fast Distance Computation (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Pujol, Eduard; Chica, Antonio; Hauser, Helwig and Alliez, Pierre. We present an acceleration structure to efficiently query the Signed Distance Field (SDF) of volumes represented by triangle meshes. The method is based on a discretization of space. In each node, we store the triangles defining the SDF behaviour in that region. Consequently, we reduce the cost of the nearest‐triangle search, prioritizing query performance while avoiding approximations of the field. We propose a method to conservatively compute the set of triangles influencing each node. Given a node, each triangle defines a region of space such that all points inside it are closer to a point in the node than the triangle is. This property is used to build the SDF acceleration structure. We do not need to explicitly compute these regions, which is crucial to the performance of our approach. We prove the correctness of the proposed method and compare it to similar approaches, confirming that our method produces faster query times than other exact methods.
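As a baseline for what such a structure accelerates, the exact unsigned point-to-mesh distance can be computed by brute force with the standard closest-point-on-triangle test, shown below in plain Python. The routine is the textbook Voronoi-region classification (as in Ericson's "Real-Time Collision Detection"); an acceleration structure of the kind described would restrict the loop to the triangles stored in the query point's node.

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def lerp(a, b, t): return tuple(x + t * (y - x) for x, y in zip(a, b))

def closest_point_on_triangle(p, a, b, c):
    # Classify p against the Voronoi regions of triangle abc.
    ab, ac, ap = sub(b, a), sub(c, a), sub(p, a)
    d1, d2 = dot(ab, ap), dot(ac, ap)
    if d1 <= 0 and d2 <= 0:
        return a                             # vertex region A
    bp = sub(p, b)
    d3, d4 = dot(ab, bp), dot(ac, bp)
    if d3 >= 0 and d4 <= d3:
        return b                             # vertex region B
    if d1 * d4 - d3 * d2 <= 0 and d1 >= 0 and d3 <= 0:
        return lerp(a, b, d1 / (d1 - d3))    # edge region AB
    cp = sub(p, c)
    d5, d6 = dot(ab, cp), dot(ac, cp)
    if d6 >= 0 and d5 <= d6:
        return c                             # vertex region C
    if d5 * d2 - d1 * d6 <= 0 and d2 >= 0 and d6 <= 0:
        return lerp(a, c, d2 / (d2 - d6))    # edge region AC
    va = d3 * d6 - d5 * d4
    if va <= 0 and d4 - d3 >= 0 and d5 - d6 >= 0:
        return lerp(b, c, (d4 - d3) / ((d4 - d3) + (d5 - d6)))  # edge BC
    vb, vc = d5 * d2 - d1 * d6, d1 * d4 - d3 * d2
    denom = va + vb + vc                     # interior: project onto plane
    v, w = vb / denom, vc / denom
    return (a[0] + ab[0] * v + ac[0] * w,
            a[1] + ab[1] * v + ac[1] * w,
            a[2] + ab[2] * v + ac[2] * w)

def unsigned_distance(p, triangles):
    # Brute-force nearest-triangle search over the whole mesh.
    best = float("inf")
    for a, b, c in triangles:
        q = closest_point_on_triangle(p, a, b, c)
        best = min(best, dot(sub(p, q), sub(p, q)) ** 0.5)
    return best
```

Obtaining a signed distance additionally requires an inside/outside classification, which the exact field representation in the paper handles without approximating the field.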