43-Issue 6

Editorial - Cover

Issue Information

ORIGINAL ARTICLES

DeforestVis: Behaviour Analysis of Machine Learning Models with Surrogate Decision Stumps

Chatzimparmpas, Angelos
Martins, Rafael M.
Telea, Alexandru C.
Kerren, Andreas

MOTIV: Visual Exploration of Moral Framing in Social Media

Wentzel, A.
Levine, L.
Dhariwal, V.
Fatemi, Z.
Bhattacharya, A.
Eugenio, B. Di
Rojecki, A.
Zheleva, E.
Marai, G.E.

Interactive Visualization on Large High‐Resolution Displays: A Survey

Belkacem, Ilyasse
Tominski, Christian
Médoc, Nicolas
Knudsen, Søren
Dachselt, Raimund
Ghoniem, Mohammad

VolTeMorph: Real‐time, Controllable and Generalizable Animation of Volumetric Representations

Garbin, Stephan J.
Kowalski, Marek
Estellers, Virginia
Szymanowicz, Stanislaw
Rezaeifar, Shideh
Shen, Jingjing
Johnson, Matthew A.
Valentin, Julien

Artistic Style Transfer Based on Attention with Knowledge Distillation

Al‐Mekhlafi, Hanadi
Liu, Shiguang

Time‐varying Extremum Graphs

Das, Somenath
Sridharamurthy, Raghavendra
Natarajan, Vijay

A High‐Scalability Graph Modification System for Large‐Scale Networks

Xu, Shaobin
Sun, Minghui
Qin, Jun

Directional Texture Editing for 3D Models

Liu, Shengqi
Chen, Zhuo
Gao, Jingnan
Yan, Yichao
Zhu, Wenhan
Lyu, Jiangjing
Yang, Xiaokang

ETBHD‐HMF: A Hierarchical Multimodal Fusion Architecture for Enhanced Text‐Based Hair Design

He, Rong
Jiao, Ge
Li, Chen

Optimizing Surface Voxelization for Triangular Meshes with Equidistant Scanlines and Gap Detection

Delgado Díez, S.
Cerrada Somolinos, C.
Gómez Palomo, S. R.

Mix‐Max: A Content‐Aware Operator for Real‐Time Texture Transitions

Fournier, Romain
Sauvage, Basile

EBPVis: Visual Analytics of Economic Behavior Patterns in a Virtual Experimental Environment

Liu, Yuhua
Ma, Yuming
Shi, Qing
Wen, Jin
Zheng, Wanjun
Yue, Xuanwu
Ye, Hang
Chen, Wei
Meng, Yuwei
Zhou, Zhiguang

Hierarchical Spherical Cross‐Parameterization for Deforming Characters

Cao, Lizhou
Peng, Chao
Major Revision from Eurographics Conference

Real‐Time Polygonal Lighting of Iridescence Effect using Precomputed Monomial‐Gaussians

Liu, Zhengze
Huo, Yuchi
Yang, Yinhui
Chen, Jie
Wang, Rui

A Hierarchical Architecture for Neural Materials

Xue, Bowen
Zhao, Shuang
Jensen, Henrik Wann
Montazeri, Zahra

Infinite 3D Landmarks: Improving Continuous 2D Facial Landmark Detection

Chandran, P.
Zoss, G.
Gotardo, P.
Bradley, D.

TraM‐NeRF: Tracing Mirror and Near‐Perfect Specular Reflections Through Neural Radiance Fields

Holland, Leif Van
Bliersbach, Ruben
Müller, Jan U.
Stotko, Patrick
Klein, Reinhard

Evaluation in Neural Style Transfer: A Review

Ioannou, Eleftherios
Maddock, Steve

Deep SVBRDF Acquisition and Modelling: A Survey

Kavoosighafi, Behnaz
Hajisharif, Saghi
Miandji, Ehsan
Baravdish, Gabriel
Cao, Wen
Unger, Jonas
Major Revision from Pacific Graphics

Deep and Fast Approximate Order Independent Transparency

Tsopouridis, Grigoris
Vasilakis, Andreas A.
Fudos, Ioannis

PhysOM: Physarum polycephalum Oriented Microstructures

Garnier, David‐Henri
Schmidt, M. P.
Rohmer, Damien

SMFS‐GAN: Style‐Guided Multi‐class Freehand Sketch‐to‐Image Synthesis

Cheng, Zhenwei
Wu, Lei
Li, Xiang
Meng, Xiangxu
Major Revision from EG Symposium on Rendering

Learned Inference of Annual Ring Pattern of Solid Wood

Larsson, Maria
Ijiri, Takashi
Shen, I‐Chao
Yoshida, Hironori
Shamir, Ariel
Igarashi, Takeo

Row–Column Separated Attention Based Low‐Light Image/Video Enhancement

Dong, Chengqi
Cao, Zhiyuan
Qi, Tuoshi
Wu, Kexin
Gao, Yixing
Tang, Fan
Major Revision from EuroVis Symposium

Evaluating Graph Layout Algorithms: A Systematic Review of Methods and Best Practices

Di Bartolomeo, Sara
Crnovrsanin, Tarik
Saffo, David
Puerta, Eduardo
Wilson, Connor
Dunne, Cody
CORRECTION

Correction to Real‐Time Neural Rendering of Dynamic Light Fields



BibTeX (43-Issue 6)
                
@article{10.1111:cgf.14852,
  journal = {Computer Graphics Forum},
  title = {{Issue Information}},
  author = {},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.14852}
}

@article{10.1111:cgf.15004,
  journal = {Computer Graphics Forum},
  title = {{DeforestVis: Behaviour Analysis of Machine Learning Models with Surrogate Decision Stumps}},
  author = {Chatzimparmpas, Angelos and Martins, Rafael M. and Telea, Alexandru C. and Kerren, Andreas},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15004}
}

@article{10.1111:cgf.15072,
  journal = {Computer Graphics Forum},
  title = {{MOTIV: Visual Exploration of Moral Framing in Social Media}},
  author = {Wentzel, A. and Levine, L. and Dhariwal, V. and Fatemi, Z. and Bhattacharya, A. and Eugenio, B. Di and Rojecki, A. and Zheleva, E. and Marai, G.E.},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15072}
}

@article{10.1111:cgf.15001,
  journal = {Computer Graphics Forum},
  title = {{Interactive Visualization on Large High‐Resolution Displays: A Survey}},
  author = {Belkacem, Ilyasse and Tominski, Christian and Médoc, Nicolas and Knudsen, Søren and Dachselt, Raimund and Ghoniem, Mohammad},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15001}
}

@article{10.1111:cgf.15117,
  journal = {Computer Graphics Forum},
  title = {{VolTeMorph: Real‐time, Controllable and Generalizable Animation of Volumetric Representations}},
  author = {Garbin, Stephan J. and Kowalski, Marek and Estellers, Virginia and Szymanowicz, Stanislaw and Rezaeifar, Shideh and Shen, Jingjing and Johnson, Matthew A. and Valentin, Julien},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15117}
}

@article{10.1111:cgf.15127,
  journal = {Computer Graphics Forum},
  title = {{Artistic Style Transfer Based on Attention with Knowledge Distillation}},
  author = {Al‐Mekhlafi, Hanadi and Liu, Shiguang},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15127}
}

@article{10.1111:cgf.15162,
  journal = {Computer Graphics Forum},
  title = {{Time‐varying Extremum Graphs}},
  author = {Das, Somenath and Sridharamurthy, Raghavendra and Natarajan, Vijay},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15162}
}

@article{10.1111:cgf.15191,
  journal = {Computer Graphics Forum},
  title = {{A High‐Scalability Graph Modification System for Large‐Scale Networks}},
  author = {Xu, Shaobin and Sun, Minghui and Qin, Jun},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15191}
}

@article{10.1111:cgf.15196,
  journal = {Computer Graphics Forum},
  title = {{Directional Texture Editing for 3D Models}},
  author = {Liu, Shengqi and Chen, Zhuo and Gao, Jingnan and Yan, Yichao and Zhu, Wenhan and Lyu, Jiangjing and Yang, Xiaokang},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15196}
}

@article{10.1111:cgf.15194,
  journal = {Computer Graphics Forum},
  title = {{ETBHD‐HMF: A Hierarchical Multimodal Fusion Architecture for Enhanced Text‐Based Hair Design}},
  author = {He, Rong and Jiao, Ge and Li, Chen},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15194}
}

@article{10.1111:cgf.15195,
  journal = {Computer Graphics Forum},
  title = {{Optimizing Surface Voxelization for Triangular Meshes with Equidistant Scanlines and Gap Detection}},
  author = {Delgado Díez, S. and Cerrada Somolinos, C. and Gómez Palomo, S. R.},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15195}
}

@article{10.1111:cgf.15193,
  journal = {Computer Graphics Forum},
  title = {{Mix‐Max: A Content‐Aware Operator for Real‐Time Texture Transitions}},
  author = {Fournier, Romain and Sauvage, Basile},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15193}
}

@article{10.1111:cgf.15200,
  journal = {Computer Graphics Forum},
  title = {{EBPVis: Visual Analytics of Economic Behavior Patterns in a Virtual Experimental Environment}},
  author = {Liu, Yuhua and Ma, Yuming and Shi, Qing and Wen, Jin and Zheng, Wanjun and Yue, Xuanwu and Ye, Hang and Chen, Wei and Meng, Yuwei and Zhou, Zhiguang},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15200}
}

@article{10.1111:cgf.15197,
  journal = {Computer Graphics Forum},
  title = {{Hierarchical Spherical Cross‐Parameterization for Deforming Characters}},
  author = {Cao, Lizhou and Peng, Chao},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15197}
}

@article{10.1111:cgf.14991,
  journal = {Computer Graphics Forum},
  title = {{Real‐Time Polygonal Lighting of Iridescence Effect using Precomputed Monomial‐Gaussians}},
  author = {Liu, Zhengze and Huo, Yuchi and Yang, Yinhui and Chen, Jie and Wang, Rui},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.14991}
}

@article{10.1111:cgf.15116,
  journal = {Computer Graphics Forum},
  title = {{A Hierarchical Architecture for Neural Materials}},
  author = {Xue, Bowen and Zhao, Shuang and Jensen, Henrik Wann and Montazeri, Zahra},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15116}
}

@article{10.1111:cgf.15126,
  journal = {Computer Graphics Forum},
  title = {{Infinite 3D Landmarks: Improving Continuous 2D Facial Landmark Detection}},
  author = {Chandran, P. and Zoss, G. and Gotardo, P. and Bradley, D.},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15126}
}

@article{10.1111:cgf.15163,
  journal = {Computer Graphics Forum},
  title = {{TraM‐NeRF: Tracing Mirror and Near‐Perfect Specular Reflections Through Neural Radiance Fields}},
  author = {Holland, Leif Van and Bliersbach, Ruben and Müller, Jan U. and Stotko, Patrick and Klein, Reinhard},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15163}
}

@article{10.1111:cgf.15165,
  journal = {Computer Graphics Forum},
  title = {{Evaluation in Neural Style Transfer: A Review}},
  author = {Ioannou, Eleftherios and Maddock, Steve},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15165}
}

@article{10.1111:cgf.15199,
  journal = {Computer Graphics Forum},
  title = {{Deep SVBRDF Acquisition and Modelling: A Survey}},
  author = {Kavoosighafi, Behnaz and Hajisharif, Saghi and Miandji, Ehsan and Baravdish, Gabriel and Cao, Wen and Unger, Jonas},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15199}
}

@article{10.1111:cgf.15071,
  journal = {Computer Graphics Forum},
  title = {{Deep and Fast Approximate Order Independent Transparency}},
  author = {Tsopouridis, Grigoris and Vasilakis, Andreas A. and Fudos, Ioannis},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15071}
}

@article{10.1111:cgf.15075,
  journal = {Computer Graphics Forum},
  title = {{PhysOM: Physarum polycephalum Oriented Microstructures}},
  author = {Garnier, David‐Henri and Schmidt, M. P. and Rohmer, Damien},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15075}
}

@article{10.1111:cgf.15190,
  journal = {Computer Graphics Forum},
  title = {{SMFS‐GAN: Style‐Guided Multi‐class Freehand Sketch‐to‐Image Synthesis}},
  author = {Cheng, Zhenwei and Wu, Lei and Li, Xiang and Meng, Xiangxu},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15190}
}

@article{10.1111:cgf.15074,
  journal = {Computer Graphics Forum},
  title = {{Learned Inference of Annual Ring Pattern of Solid Wood}},
  author = {Larsson, Maria and Ijiri, Takashi and Shen, I‐Chao and Yoshida, Hironori and Shamir, Ariel and Igarashi, Takeo},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15074}
}

@article{10.1111:cgf.15192,
  journal = {Computer Graphics Forum},
  title = {{Row–Column Separated Attention Based Low‐Light Image/Video Enhancement}},
  author = {Dong, Chengqi and Cao, Zhiyuan and Qi, Tuoshi and Wu, Kexin and Gao, Yixing and Tang, Fan},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15192}
}

@article{10.1111:cgf.15073,
  journal = {Computer Graphics Forum},
  title = {{Evaluating Graph Layout Algorithms: A Systematic Review of Methods and Best Practices}},
  author = {Di Bartolomeo, Sara and Crnovrsanin, Tarik and Saffo, David and Puerta, Eduardo and Wilson, Connor and Dunne, Cody},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15073}
}

@article{10.1111:cgf.15164,
  journal = {Computer Graphics Forum},
  title = {{Correction to Real‐Time Neural Rendering of Dynamic Light Fields}},
  author = {},
  year = {2024},
  publisher = {© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.15164}
}

Recent Submissions

Now showing 1 - 27 of 27
  • Item
    Issue Information
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Alliez, Pierre; Wimmer, Michael
  • Item
    DeforestVis: Behaviour Analysis of Machine Learning Models with Surrogate Decision Stumps
(© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Chatzimparmpas, Angelos; Martins, Rafael M.; Telea, Alexandru C.; Kerren, Andreas; Alliez, Pierre; Wimmer, Michael
    As the complexity of machine learning (ML) models increases and their application in different (and critical) domains grows, there is a strong demand for more interpretable and trustworthy ML. A direct, model‐agnostic, way to interpret such models is to train surrogate models—such as rule sets and decision trees—that sufficiently approximate the original ones while being simpler and easier‐to‐explain. Yet, rule sets can become very lengthy, with many if–else statements, and decision tree depth grows rapidly when accurately emulating complex ML models. In such cases, both approaches can fail to meet their core goal—providing users with model interpretability. To tackle this, we propose DeforestVis, a visual analytics tool that offers summarization of the behaviour of complex ML models by providing surrogate decision stumps (one‐level decision trees) generated with the Adaptive Boosting (AdaBoost) technique. DeforestVis helps users to explore the complexity versus fidelity trade‐off by incrementally generating more stumps, creating attribute‐based explanations with weighted stumps to justify decision making, and analysing the impact of rule overriding on training instance allocation between one or more stumps. An independent test set allows users to monitor the effectiveness of manual rule changes and form hypotheses based on case‐by‐case analyses. We show the applicability and usefulness of DeforestVis with two use cases and expert interviews with data analysts and model developers.
  • Item
    MOTIV: Visual Exploration of Moral Framing in Social Media
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Wentzel, A.; Levine, L.; Dhariwal, V.; Fatemi, Z.; Bhattacharya, A.; Eugenio, B. Di; Rojecki, A.; Zheleva, E.; Marai, G.E.; Alliez, Pierre; Wimmer, Michael
We present a visual computing framework for analysing moral rhetoric on social media around controversial topics. Using Moral Foundation Theory, we propose a methodology for deconstructing and visualizing each of these moral dimensions as expressed in microblog data. We characterize the design of this framework, developed in collaboration with experts from language processing, communications and causal inference. Our approach integrates microblog data with multiple sources of geospatial and temporal data, and leverages unsupervised machine learning (generalized additive models) to support collaborative hypothesis discovery and testing. We implement this approach in a system named MOTIV. We illustrate this approach on two problems, one related to Stay‐at‐home policies during the COVID‐19 pandemic, and the other related to the Black Lives Matter movement. Through detailed case studies and discussions with collaborators, we identify several insights discovered regarding the different drivers of moral sentiment in social media. Our results indicate that this visual approach supports rapid, collaborative hypothesis testing, and can help give insights into the underlying moral values behind controversial political issues.
  • Item
    Interactive Visualization on Large High‐Resolution Displays: A Survey
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Belkacem, Ilyasse; Tominski, Christian; Médoc, Nicolas; Knudsen, Søren; Dachselt, Raimund; Ghoniem, Mohammad; Alliez, Pierre; Wimmer, Michael
    In the past few years, large high‐resolution displays (LHRDs) have attracted considerable attention from researchers, industries and application areas that increasingly rely on data‐driven decision‐making. An up‐to‐date survey on the use of LHRDs for interactive data visualization seems warranted to summarize how new solutions meet the characteristics and requirements of LHRDs and take advantage of their unique benefits. In this survey, we start by defining LHRDs and outlining the consequence of LHRD environments on interactive visualizations in terms of more pixels, space, users and devices. Then, we review related literature along the four axes of visualization, interaction, evaluation studies and applications. With these four axes, our survey provides a unique perspective and covers a broad range of aspects being relevant when developing interactive visual data analysis solutions for LHRDs. We conclude this survey by reflecting on a number of opportunities for future research to help the community take up the still‐open challenges of interactive visualization on LHRDs.
  • Item
    VolTeMorph: Real‐time, Controllable and Generalizable Animation of Volumetric Representations
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Garbin, Stephan J.; Kowalski, Marek; Estellers, Virginia; Szymanowicz, Stanislaw; Rezaeifar, Shideh; Shen, Jingjing; Johnson, Matthew A.; Valentin, Julien; Alliez, Pierre; Wimmer, Michael
    The recent increase in popularity of volumetric representations for scene reconstruction and novel view synthesis has put renewed focus on animating volumetric content at high visual quality and in real‐time. While implicit deformation methods based on learned functions can produce impressive results, they are ‘black boxes’ to artists and content creators, they require large amounts of training data to generalize meaningfully, and they do not produce realistic extrapolations outside of this data. In this work, we solve these issues by introducing a volume deformation method which is real‐time even for complex deformations, easy to edit with off‐the‐shelf software and can extrapolate convincingly. To demonstrate the versatility of our method, we apply it in two scenarios: physics‐based object deformation and telepresence where avatars are controlled using blendshapes. We also perform thorough experiments showing that our method compares favourably to both volumetric approaches combined with implicit deformation and methods based on mesh deformation.
  • Item
    Artistic Style Transfer Based on Attention with Knowledge Distillation
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Al‐Mekhlafi, Hanadi; Liu, Shiguang; Alliez, Pierre; Wimmer, Michael
    Artistic style transfer involves the adaption of an input image to reflect the style of a reference image while maintaining its original content. This technique, now a prominent focus due to its prospective use in creative fields like digital art and graphic design, typically applies normalization techniques and attention mechanisms. While these methods yield decent results, they often fall short due to distortion of content image details and non‐artefact styles. In this paper, we introduce a novel approach that synergizes adaptive instance normalization (AdaIN), attention mechanisms, knowledge distillation (KD) and strategically placed internal layers, and new enhancements designed to preserve content details and provide a nuanced control over the style transfer process. We introduce a Detail Enhancement Module to amplify high‐frequency details in the content image, enhancing edge and texture preservation. A Multi‐scale Strategy is implemented to ensure uniform style application across various detail levels, leading to more coherent stylization. The Content Feature Refinement process refines content features, sharpening and emphasizing details to preserve structural and textural integrity. AdaIN's distinctive feature of efficiently collecting style data is exploited in our approach, coupled with attention mechanisms' inherent ability to conserve content information. We supplement this blend with KD for the enhancement of model accuracy and efficiency. Additionally, the introduction of internal layers acts as a conduit to further improve the style transfer process, increasing the transfer level of features and fostering better stylized results. The cornerstone of our technique lies in preserving the content structure amidst complex style transfers. Experimental results affirm the superior performance of our method over existing techniques in both quantitative and qualitative evaluations.
  • Item
    Time‐varying Extremum Graphs
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Das, Somenath; Sridharamurthy, Raghavendra; Natarajan, Vijay; Alliez, Pierre; Wimmer, Michael
We introduce the time‐varying extremum graph (TVEG), a topological structure to support visualization and analysis of a time‐varying scalar field. The extremum graph is a sub‐structure of the Morse–Smale complex. It captures the adjacency relationship between cells in the Morse decomposition of a scalar field. We define the TVEG as a time‐varying extension of the extremum graph and demonstrate how it captures salient feature tracks within a dynamic scalar field. We formulate the construction of the TVEG as an optimization problem and describe an algorithm for computing the graph. We also demonstrate the capabilities of the TVEG towards identification and exploration of topological events such as deletion, generation, split and merge within a dynamic scalar field via comprehensive case studies, including viscous fingers and 3D von Kármán vortex street datasets.
  • Item
    A High‐Scalability Graph Modification System for Large‐Scale Networks
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Xu, Shaobin; Sun, Minghui; Qin, Jun; Alliez, Pierre; Wimmer, Michael
Modifying network results is the most intuitive way to inject domain knowledge into network detection algorithms to improve their performance. While advances in computation scalability have made detecting large‐scale networks possible, the human ability to modify such networks has not scaled accordingly, resulting in a huge ‘interaction gap’. Most existing works only support navigating and modifying edges one by one in a graph visualization, which causes a significant interaction burden when faced with large‐scale networks. In this work, we propose a novel graph pattern mining algorithm based on the minimum description length (MDL) principle to partition and summarize multi‐feature and isomorphic sub‐graph matches. The mined sub‐graph patterns can be utilized as mediums for modifying large‐scale networks. Combining two traditional approaches, we introduce a new coarse‐middle‐fine graph modification paradigm (i.e. query graph‐based modification → sub‐graph pattern‐based modification → raw edge‐based modification). We further present a graph modification system that supports the graph modification paradigm for improving the scalability of modifying detected large‐scale networks. We evaluate the performance of our graph pattern mining algorithm through an experimental study, demonstrate the usefulness of our system through a case study, and illustrate the efficiency of our graph modification paradigm through a user study.
  • Item
    Directional Texture Editing for 3D Models
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Liu, Shengqi; Chen, Zhuo; Gao, Jingnan; Yan, Yichao; Zhu, Wenhan; Lyu, Jiangjing; Yang, Xiaokang; Alliez, Pierre; Wimmer, Michael
Texture editing is a crucial task in 3D modelling that allows users to automatically manipulate the surface materials of 3D models. However, the inherent complexity of 3D models and the ambiguous text description lead to the challenge of this task. To tackle this challenge, we propose ITEM3D, a Texture Editing Model designed for automatic object editing according to text instructions. Leveraging the diffusion models and the differentiable rendering, ITEM3D takes the rendered images as the bridge between text and 3D representation and further optimizes the disentangled texture and environment map. Previous methods adopted the absolute editing direction, namely score distillation sampling (SDS) as the optimization objective, which unfortunately results in noisy appearances and text inconsistencies. To solve the problem caused by the ambiguous text, we introduce a relative editing direction, an optimization objective defined by the noise difference between the source and target texts, to release the semantic ambiguity between the texts and images. Additionally, we gradually adjust the direction during optimization to further address the unexpected deviation in the texture domain. Qualitative and quantitative experiments show that our ITEM3D outperforms the state‐of‐the‐art methods on various 3D objects. We also perform text‐guided relighting to show explicit control over lighting.
  • Item
    ETBHD‐HMF: A Hierarchical Multimodal Fusion Architecture for Enhanced Text‐Based Hair Design
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) He, Rong; Jiao, Ge; Li, Chen; Alliez, Pierre; Wimmer, Michael
    Text‐based hair design (TBHD) represents an innovative approach that utilizes text instructions for crafting hairstyle and colour, renowned for its flexibility and scalability. However, enhancing TBHD algorithms to improve generation quality and editing accuracy remains a current research difficulty. One important reason is that existing models fall short in alignment and fusion designs. Therefore, we propose a new layered multimodal fusion network called ETBHD‐HMF, which decouples the input image and hair text information into layered hair colour and hairstyle representations. Within this network, the channel enhancement separation (CES) module is proposed to enhance important signals and suppress noise for text representation obtained from CLIP, thus improving generation quality. Based on this, we develop the weighted mapping fusion (WMF) sub‐networks for hair colour and hairstyle. This sub‐network applies the mapper operations to input image and text representations, acquiring joint information. The WMF then selectively merges image representation and joint information from various style layers using weighted operations, ultimately achieving fine‐grained hairstyle designs. Additionally, to enhance editing accuracy and quality, we design a modality alignment loss to refine and optimize the information transmission and integration of the network. The experimental results of applying the network to the CelebA‐HQ dataset demonstrate that our proposed model exhibits superior overall performance in terms of generation quality, visual realism, and editing accuracy. ETBHD‐HMF (27.8 PSNR, 0.864 IDS) outperformed HairCLIP (26.9 PSNR, 0.828 IDS), with a 3% higher PSNR and a 4% higher IDS.
  • Item
    Optimizing Surface Voxelization for Triangular Meshes with Equidistant Scanlines and Gap Detection
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Delgado Díez, S.; Cerrada Somolinos, C.; Gómez Palomo, S. R.; Alliez, Pierre; Wimmer, Michael
This paper presents an efficient algorithm for voxelizing the surface of triangular meshes in a single compute pass. The algorithm uses parallel equidistant lines to traverse the interior of triangles, minimizing costly memory operations and avoiding visiting the same voxels multiple times. By detecting and visiting only the voxels in each line operation, the proposed method achieves better performance results. This method incorporates a gap detection step, targeting areas where scanline‐based voxelization methods might fail. By selectively addressing these gaps, our method attains superior performance outcomes. Additionally, the algorithm is written entirely in a single compute GLSL shader, which makes it highly portable and vendor independent. Its simplicity also makes it easy to adapt and extend for various applications. The paper compares the results of this algorithm with other modern methods, comprehensively comparing the time performance and resources used. Additionally, we introduce a novel metric, the ‘Slope Consistency Value’, which quantifies triangle orientation's impact on voxelization accuracy for scanline‐based approaches. The results show that the proposed solution outperforms existing modern methods, especially in densely populated scenes with homogeneous triangle sizes and at higher resolutions.
  • Item
    Mix‐Max: A Content‐Aware Operator for Real‐Time Texture Transitions
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Fournier, Romain; Sauvage, Basile; Alliez, Pierre; Wimmer, Michael
    Mixing textures is a basic and ubiquitous operation in data‐driven algorithms for real‐time texture generation and rendering. It is usually performed either by linear blending, or by cutting. We propose a new mixing operator which encompasses and extends both, creating more complex transitions that adapt to the texture's contents. Our mixing operator takes as input two or more textures along with two or more priority maps, which encode how the texture patterns should interact. The resulting mixed texture is defined pixel‐wise by selecting the maximum of both priorities. We show that it integrates smoothly into two widespread applications: transition between two different textures, and texture synthesis that mixes pieces of the same texture. We provide constant‐time and parallel evaluation of the resulting mix over square footprints of MIP‐maps, making our operator suitable for real‐time rendering. We also develop a micro‐priority model, inspired by micro‐geometry models in rendering, which represents sub‐pixel priorities by a statistical distribution, and which allows for tuning between sharp cuts and smooth blend.
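    The core of the operator, selecting at every pixel the texel whose priority map is largest there, can be sketched in a few lines. This is a minimal illustration of the max‐selection idea only; the function name and signature are assumptions, and the paper's real contribution (MIP‐map footprint integration and the micro‐priority model) is not reproduced here.

    ```python
    import numpy as np

    def mix_max(textures, priorities):
        """Pixel-wise mix of n textures (each H x W x C) guided by n
        priority maps (each H x W): at every pixel, output the texel of
        the input whose priority is largest there."""
        textures = np.stack(textures)      # (n, H, W, C)
        priorities = np.stack(priorities)  # (n, H, W)
        winner = np.argmax(priorities, axis=0)  # (H, W) index of max priority
        # Gather the winning texel per pixel, broadcasting over channels.
        return np.take_along_axis(textures, winner[None, ..., None], axis=0)[0]
    ```

    With constant priorities this degenerates to a hard cut; the paper's micro‐priority distributions are what smooth the transition between a sharp cut and a blend.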
  • Item
    EBPVis: Visual Analytics of Economic Behavior Patterns in a Virtual Experimental Environment
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Liu, Yuhua; Ma, Yuming; Shi, Qing; Wen, Jin; Zheng, Wanjun; Yue, Xuanwu; Ye, Hang; Chen, Wei; Meng, Yuwei; Zhou, Zhiguang; Alliez, Pierre; Wimmer, Michael
    Experimental economics is an important branch of economics that studies human behaviours in a controlled laboratory setting or out in the field. Scientific experiments are conducted to record the decisions people make in specific circumstances and to verify economic theories. As a key pair of variables in the virtual experimental environment, decisions and outcomes vary with the subjective factors of participants and objective circumstances, making it difficult to capture human behaviour patterns and establish correlations that verify economic theories. In this paper, we present a visual analytics system, EBPVis, which enables economists to visually explore human behaviour patterns and faithfully verify economic theories, e.g. the vicious cycle of poverty and the poverty trap. We utilize a Doc2Vec model to transform the economic behaviours of participants into a vectorized space according to their sequential decisions, where frequent sequences can be easily perceived and extracted to represent human behaviour patterns. To explore the correlation between decisions and outcomes, an Outcome View displays the outcome variables for behaviour patterns. We also provide a Comparison View that supports efficient comparison between multiple behaviour patterns by revealing their differences in decision combinations and time‐varying profits. Moreover, an Individual View illustrates the outcome accumulation and behaviour patterns of subjects. Case studies, expert feedback and user studies based on a real‐world dataset demonstrate the effectiveness and practicability of EBPVis in representing economic behaviour patterns and certifying economic theories.
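    The pattern‐extraction step the abstract describes, finding frequent decision sequences across participants, can be illustrated with a simple n‐gram count. The paper embeds full sequences with Doc2Vec before extracting patterns; this counter is only a toy stand‐in for the "frequent sequences" idea, and the function name and decision labels are invented for illustration.

    ```python
    from collections import Counter

    def frequent_patterns(decision_seqs, n=3, top_k=5):
        """Count length-n decision subsequences across all participants
        and return the top_k most frequent ones as (pattern, count) pairs."""
        counts = Counter()
        for seq in decision_seqs:
            # Slide a window of length n over each participant's decisions.
            for i in range(len(seq) - n + 1):
                counts[tuple(seq[i:i + n])] += 1
        return counts.most_common(top_k)
    ```

    Each returned pattern could then be linked to its participants' outcomes, mirroring what the Outcome and Comparison Views do interactively.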
  • Item
    Hierarchical Spherical Cross‐Parameterization for Deforming Characters
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Cao, Lizhou; Peng, Chao; Alliez, Pierre; Wimmer, Michael
    The demand for immersive technology and realistic virtual environments has created a need for automated solutions to generate characters with morphological variations. However, existing approaches either rely on manual labour or oversimplify the problem by limiting it to static meshes or deformation transfers without shape morphing. In this paper, we propose a new cross‐parameterization approach that semi‐automates the generation of morphologically diverse characters with synthesized articulations and animations. The main contribution of this work is that our approach parameterizes deforming characters into a novel hierarchical multi‐sphere domain, while considering the attributes of mesh topology, deformation and animation. With such a multi‐sphere domain, our approach minimizes parametric distortion rates, enhances the bijectivity of parameterization and aligns deforming feature correspondences. The alignment process we propose allows users to focus only on major joint pairs, which is much simpler and more intuitive than the existing alignment solutions that involve a manual process of identifying feature points on mesh surfaces. Compared to recent works, our approach achieves high‐quality results in the applications of 3D morphing, texture transfer, character synthesis and deformation transfer.
  • Item
    Real‐Time Polygonal Lighting of Iridescence Effect using Precomputed Monomial‐Gaussians
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Liu, Zhengze; Huo, Yuchi; Yang, Yinhui; Chen, Jie; Wang, Rui; Alliez, Pierre; Wimmer, Michael
    The real world abounds with phenomena, such as iridescence on thin films and metal‐oxide layers, that are only explicable by wave optics. Existing research can reproduce such effects with simple point lights or low‐frequency environmental lighting. However, it remains difficult to efficiently render these effects when near‐field, high‐frequency area lights are involved. This paper presents a high‐fidelity, real‐time rendering algorithm for the iridescence effect under polygonal lights. We introduce a novel set of spherical functions, Monomial‐Gaussians, to accurately fit the reflectance of iridescent materials. With a precomputed lookup table, Monomial‐Gaussians are easily integrated over spherical polygons in linear time. Importance sampling of Monomial‐Gaussians is also supported to efficiently reduce Monte‐Carlo error. Our approach produces accurate renderings of the iridescence effect while preserving high frame rates.
  • Item
    A Hierarchical Architecture for Neural Materials
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Xue, Bowen; Zhao, Shuang; Jensen, Henrik Wann; Montazeri, Zahra; Alliez, Pierre; Wimmer, Michael
    Neural reflectance models are capable of reproducing the spatially‐varying appearance of many real‐world materials at different scales. Unfortunately, existing techniques such as NeuMIP have difficulty handling materials with strong shadowing effects or detailed specular highlights. In this paper, we introduce a neural appearance model that offers a new level of accuracy. Central to our model is an inception‐based core network structure that captures material appearance at multiple scales using parallel‐operating kernels and ensures multi‐stage features through specialized convolution layers. Furthermore, we encode the inputs into frequency space, introduce a gradient‐based loss, and apply it adaptively according to the progress of the learning phase. We demonstrate the effectiveness of our method on a variety of synthetic and real examples.
  • Item
    Infinite 3D Landmarks: Improving Continuous 2D Facial Landmark Detection
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Chandran, P.; Zoss, G.; Gotardo, P.; Bradley, D.; Alliez, Pierre; Wimmer, Michael
    In this paper, we examine three important issues in the practical use of state‐of‐the‐art facial landmark detectors and show how a combination of specific architectural modifications can directly improve their accuracy and temporal stability. First, many facial landmark detectors require a face normalization step as a pre‐process, often accomplished by a separately trained neural network that crops and resizes the face in the input image. There is no guarantee that this pre‐trained network performs optimal face normalization for the task of landmark detection. We instead analyse the use of a spatial transformer network that is trained alongside the landmark detector in an unsupervised manner, jointly learning optimal face normalization and landmark detection within a single neural network. Second, we show that modifying the output head of the landmark predictor to infer landmarks in a canonical 3D space rather than directly in 2D can further improve accuracy. To convert the predicted 3D landmarks into screen space, we additionally predict the camera intrinsics and head pose from the input image. As a side benefit, this allows predicting the 3D face shape from a given image using only 2D landmarks as supervision, which is useful for determining landmark visibility, among other things. Third, when training a landmark detector on multiple datasets at the same time, annotation inconsistencies across datasets force the network to produce a sub‐optimal average. We propose adding a semantic correction network to address this issue. This additional lightweight neural network is trained alongside the landmark detector, without requiring any additional supervision. While the insights of this paper can be applied to most common landmark detectors, we specifically target a recently proposed continuous 2D landmark detector to demonstrate how each of our additions leads to meaningful improvements over the state‐of‐the‐art on standard benchmarks.
  • Item
    TraM‐NeRF: Tracing Mirror and Near‐Perfect Specular Reflections Through Neural Radiance Fields
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Holland, Leif Van; Bliersbach, Ruben; Müller, Jan U.; Stotko, Patrick; Klein, Reinhard; Alliez, Pierre; Wimmer, Michael
    Implicit representations like neural radiance fields (NeRF) have shown impressive results for photorealistic rendering of complex scenes with fine details. However, ideal or near‐perfectly specular reflecting objects such as mirrors, which are often encountered in indoor scenes, impose ambiguities and inconsistencies on the representation of the reconstructed scene, leading to severe artifacts in the synthesized renderings. In this paper, we present a novel reflection tracing method tailored to the volume rendering within NeRF that takes these mirror‐like objects into account while avoiding the cost of straightforward but expensive extensions through standard path tracing. By explicitly modelling the reflection behaviour using physically plausible materials and estimating the reflected radiance with Monte‐Carlo methods within the volume rendering formulation, we derive efficient strategies for importance sampling and transmittance computation along rays from only a few samples. We show that our method enables the training of consistent representations of such challenging scenes and achieves superior results compared to previous state‐of‐the‐art approaches.
  • Item
    Evaluation in Neural Style Transfer: A Review
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Ioannou, Eleftherios; Maddock, Steve; Alliez, Pierre; Wimmer, Michael
    The field of neural style transfer (NST) has witnessed remarkable progress in the past few years, with approaches being able to synthesize artistic and photorealistic images and videos of exceptional quality. To evaluate such results, a diverse landscape of evaluation methods and metrics is used, including authors' opinions based on side‐by‐side comparisons, human evaluation studies that quantify the subjective judgements of participants, and a multitude of quantitative computational metrics which objectively assess the different aspects of an algorithm's performance. However, there is no consensus regarding the most suitable and effective evaluation procedure that can guarantee the reliability of the results. In this review, we provide an in‐depth analysis of existing evaluation techniques, identify the inconsistencies and limitations of current evaluation methods, and give recommendations for standardized evaluation practices. We believe that the development of a robust evaluation framework will not only enable more meaningful and fairer comparisons among NST methods but will also enhance the comprehension and interpretation of research findings in the field.
  • Item
    Deep SVBRDF Acquisition and Modelling: A Survey
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Kavoosighafi, Behnaz; Hajisharif, Saghi; Miandji, Ehsan; Baravdish, Gabriel; Cao, Wen; Unger, Jonas; Alliez, Pierre; Wimmer, Michael
    Hand in hand with the rapid development of machine learning, deep learning and generative AI algorithms and architectures, the graphics community has seen a remarkable evolution of novel techniques for material and appearance capture. Typically, these machine‐learning‐driven methods and technologies, in contrast to traditional techniques, rely on only a single or very few input images, while enabling the recovery of detailed, high‐quality measurements of bi‐directional reflectance distribution functions, as well as the corresponding spatially varying material properties, also known as Spatially Varying Bi‐directional Reflectance Distribution Functions (SVBRDFs). Learning‐based approaches for appearance capture will play a key role in the development of new technologies that will exhibit a significant impact on virtually all domains of graphics. Therefore, to facilitate future research, this State‐of‐the‐Art Report (STAR) presents an in‐depth overview of the state‐of‐the‐art in machine‐learning‐driven material capture in general, and focuses on SVBRDF acquisition in particular, due to its importance in accurately modelling complex light interaction properties of real‐world materials. The overview includes a categorization of current methods along with a summary of each technique, an evaluation of their functionalities, their complexity in terms of acquisition requirements, computational aspects and usability constraints. The STAR is concluded by looking forward and summarizing open challenges in research and development toward predictive and general appearance capture in this field. A complete list of the methods and papers reviewed in this survey is available at .
  • Item
    Deep and Fast Approximate Order Independent Transparency
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Tsopouridis, Grigoris; Vasilakis, Andreas A.; Fudos, Ioannis; Alliez, Pierre; Wimmer, Michael
    We present a machine learning approach for efficiently computing order independent transparency (OIT) by deploying a lightweight neural network implemented entirely in shaders. Our method is fast, requires a small constant amount of memory (depending only on the screen resolution, not on the number of triangles or transparent layers), is more accurate than previous approximate methods, works for every scene without setup, and is portable to all platforms, even those with commodity GPUs. Our method requires a rendering pass to extract all features, which are subsequently used to predict the overall OIT pixel colour with a pre‐trained neural network. We provide a comparative experimental evaluation and shader source code of all methods for reproduction of the experiments.
  • Item
    PhysOM: Physarum polycephalum Oriented Microstructures
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Garnier, David‐Henri; Schmidt, M. P.; Rohmer, Damien; Alliez, Pierre; Wimmer, Michael
    Biological shapes possess fascinating properties and behaviours that are the result of emergent mechanisms: they can evolve over time and dynamically adapt to changes in their environment, while also exhibiting interesting mechanical properties and aesthetic appeal. In this work, we bring and extend an existing biologically inspired model of Physarum polycephalum, also known as slime mould, to the field of computer graphics, in order to design porous organic‐like microstructures that resemble natural foam‐like cells or filament‐like patterns with variable local properties. In contrast to approaches based on static global optimization, which provide only limited expressivity over the result, our method allows precise control over the local orientation of 3D patterns, relative cell extension and precise infill of shapes with well‐defined boundaries. To this end, we extend the classical agent‐based model for Physarum to fill an arbitrary domain with local anisotropic behaviour. We further provide a detailed analysis of the model parameters, contributing to the understanding of the system's behaviour. The method is fast, parallelizable, scalable to large volumes and compatible with user interaction, allowing a designer to guide the structure, erase parts and observe its evolution in real time. Overall, our method provides a versatile and efficient means of generating intricate organic microstructures with potential applications in fields such as additive manufacturing, design, and biological representation and engineering.
  • Item
    SMFS‐GAN: Style‐Guided Multi‐class Freehand Sketch‐to‐Image Synthesis
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Cheng, Zhenwei; Wu, Lei; Li, Xiang; Meng, Xiangxu; Alliez, Pierre; Wimmer, Michael
    Freehand sketch‐to‐image (S2I) synthesis is a challenging task due to the individualized lines and random shapes of freehand sketches. The multi‐class freehand sketch‐to‐image synthesis task, in turn, presents new challenges for this research area. This task requires not only handling the problems posed by freehand sketches but also analysing multi‐class domain differences within a single model. However, existing methods often have difficulty learning domain differences between multiple classes, and cannot generate controllable and appropriate textures while maintaining shape stability. In this paper, we propose a style‐guided multi‐class freehand sketch‐to‐image synthesis model, SMFS‐GAN, which can be trained using only unpaired data. To this end, we introduce a contrast‐based style encoder that optimizes the network's perception of domain disparities by explicitly modelling the differences between classes and thus extracting style information across domains. Further, to optimize the fine‐grained texture of the generated results and the shape consistency with freehand sketches, we propose a local texture refinement discriminator and a Shape Constraint Module, respectively. In addition, to address the class imbalance in the QMUL‐Sketch dataset, we add 6K manually drawn images to obtain the QMUL‐Sketch+ dataset. Extensive experiments on the SketchyCOCO Object, QMUL‐Sketch+ and Pseudosketches datasets demonstrate the effectiveness and superiority of our proposed method.
  • Item
    Learned Inference of Annual Ring Pattern of Solid Wood
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Larsson, Maria; Ijiri, Takashi; Shen, I‐Chao; Yoshida, Hironori; Shamir, Ariel; Igarashi, Takeo; Alliez, Pierre; Wimmer, Michael
    We propose a method for inferring the internal anisotropic volumetric texture of a given wood block from annotated photographs of its external surfaces. The global structure of the annual ring pattern is represented by a continuous spatial scalar field referred to as the growth time field (GTF). First, we train a generic neural model that can represent various GTFs using procedurally generated training data. Next, we fit the generic model to the GTF of a given wood block based on surface annotations. Finally, we convert the GTF to an annual ring field (ARF) revealing the layered pattern, and apply neural style transfer to render orientation‐dependent small‐scale features and colours on a cut surface. We show rendered results for various physically cut real wood samples. Our method has physical and virtual applications, such as cut previews before subtractively fabricating solid‐wood artifacts and simulating object breakage.
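    The GTF‐to‐ARF conversion can be pictured as wrapping the monotonically growing time field into a periodic layer field, e.g. by taking the fractional part of growth time per ring period. This is one simple way to reveal layered rings from a scalar field; the paper's exact mapping is not specified in the abstract, so treat this function and its `ring_period` parameter as assumptions.

    ```python
    import numpy as np

    def annual_ring_field(gtf, ring_period=1.0):
        """Convert a growth-time field (array of growth times, any shape)
        into a periodic annual-ring field in [0, 1): values near 0 mark
        ring boundaries, values in between lie inside a ring layer."""
        return np.mod(gtf / ring_period, 1.0)
    ```

    Evaluating this over a 3D grid of growth times yields nested iso‐surfaces, the annual rings, onto which the paper's style transfer then adds small‐scale detail and colour.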
  • Item
    Row–Column Separated Attention Based Low‐Light Image/Video Enhancement
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Dong, Chengqi; Cao, Zhiyuan; Qi, Tuoshi; Wu, Kexin; Gao, Yixing; Tang, Fan; Alliez, Pierre; Wimmer, Michael
    The U‐Net structure is widely used for low‐light image/video enhancement. Without proper guidance from global information, however, the enhanced images exhibit large local noise and loss of detail in some areas. Attention mechanisms can better focus on and exploit global information, but applying attention to full images can significantly increase the number of parameters and computations. We propose a Row–Column Separated Attention (RCSA) module inserted after an improved U‐Net. The RCSA module's input is the mean and maximum of the rows and columns of the feature map, which uses global information to guide local information with fewer parameters. We also propose two temporal loss functions to extend the method to low‐light video enhancement while maintaining temporal consistency. Extensive experiments on the LOL and MIT Adobe FiveK image datasets and the SDSD video dataset demonstrate the effectiveness of our approach.
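    The compact global summary the RCSA module consumes, row‐wise and column‐wise mean and maximum of the feature map, is cheap to compute. The sketch below shows why this shrinks the attention input from H×W positions to H+W descriptors; it is illustrative only, and the function name and tensor layout are assumptions rather than the paper's exact formulation.

    ```python
    import numpy as np

    def row_col_descriptors(feat):
        """Collapse a feature map of shape (H, W, C) into row descriptors
        (H, C) and column descriptors (W, C) via mean and max pooling,
        reducing the attention input from H*W positions to H+W."""
        row_mean, row_max = feat.mean(axis=1), feat.max(axis=1)  # (H, C)
        col_mean, col_max = feat.mean(axis=0), feat.max(axis=0)  # (W, C)
        return row_mean, row_max, col_mean, col_max
    ```

    An attention branch operating on these H+C and W+C sized inputs touches far fewer elements than full spatial self‐attention, which is the parameter saving the abstract points to.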
  • Item
    Evaluating Graph Layout Algorithms: A Systematic Review of Methods and Best Practices
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Di Bartolomeo, Sara; Crnovrsanin, Tarik; Saffo, David; Puerta, Eduardo; Wilson, Connor; Dunne, Cody; Alliez, Pierre; Wimmer, Michael
    Evaluations—encompassing computational evaluations, benchmarks and user studies—are essential tools for validating the performance and applicability of graph and network layout algorithms (also known as graph drawing). These evaluations not only offer significant insights into an algorithm's performance and capabilities, but also help the reader determine whether the algorithm suits a specific purpose, such as handling graphs with a high volume of nodes or dense graphs. Unfortunately, there is no standard approach for evaluating layout algorithms. Prior work presents a ‘Wild West’ of diverse benchmark datasets and data characteristics, as well as varied evaluation metrics and ways of reporting results. It is often difficult to compare layout algorithms without first implementing them and then running one's own evaluation. In this systematic review, we delve into the myriad methodologies employed to conduct evaluations—the techniques used, the outcomes reported and the pros and cons of choosing one approach over another. Our examination extends beyond computational evaluations to user‐centric evaluations, presenting a comprehensive understanding of algorithm validation. This systematic review—and its accompanying website—guides readers through evaluation types, the types of results reported, and the available benchmark datasets and their data characteristics. Our objective is to provide a valuable resource for readers to understand and effectively apply various evaluation methods for graph layout algorithms. A free copy of this paper and all supplemental material is available at , and the categorized papers are accessible on our website at .
  • Item
    Correction to Real‐Time Neural Rendering of Dynamic Light Fields
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Alliez, Pierre; Wimmer, Michael