44-Issue 6


Issue Information

Issue Information

Original Article

Augury and Forerunner: Real-Time Feedback Via Predictive Numerical Optimization and Input Prediction

Graus, J.
Gingold, Y.
Original Article

Dynamic Cutting Simulation Using Elastic Snapping for Mesh Quality Optimization

Zeng, Z.
Courtecuisse, H.
Original Article

NCD: Normal-Guided Chamfer Distance Loss for Watertight Mesh Reconstruction from Unoriented Point Clouds

Li, Jiaxin
Tan, Jiawei
Ou, Zhilong
Wang, Hongxing
Original Article

Gaze-Aware Visualisation: Design Considerations and Research Agenda

Jianu, Radu
Silva, Nelson
Rodrigues, Nils
Blascheck, Tanja
Schreck, Tobias
Weiskopf, Daniel
Original Article

LEAD: Latent Realignment for Human Motion Diffusion

Andreou, Nefeli
Wang, Xi
Fernández Abrevaya, Victoria
Cani, Marie-Paule
Chrysanthou, Yiorgos
Kalogeiton, Vicky
Original Article

Self-Calibrating Fisheye Lens Aberrations for Novel View Synthesis

Xiang, Jinhui
Li, Yuqi
Li, Jiabao
Zheng, Wenxing
Fu, Qiang
Original Article

Adaptive and Iterative Point Cloud Denoising with Score-Based Diffusion Model

Wang, Zhaonan
Li, Manyi
Xin, Shiqing
Tu, Changhe
Original Article

MARV: Multiview Augmented Reality Visualisation for Exploring Rich Material Data

Gall, Alexander
Heim, Anja
Gröller, Eduard
Heinzl, Christoph
Original Article

Optimal Dimensionality Selection Using Hull Heatmaps for Single-Cell Analysis

Jeong, Haejin
Jeong, Hyoung-oh
Lee, Semin
Jeong, Won-Ki
Original Article

MPACT: Mesoscopic Profiling and Abstraction of Crowd Trajectories

Lemonari, Marilena
Panayiotou, Andreas
Kyriakou, Theodoros
Pelechano, Nuria
Chrysanthou, Yiorgos
Aristidou, Andreas
Charalambous, Panayiotis
Original Article

Exploratory Analysis of Scientific Publications for University Governance

Gràcia, A.
Padró, L.
Alarcon, E.
Vázquez, P.
Original Article

Vector-Based Terrain Modelling

Perche, Simon
Guérin, Eric
Galin, Eric
Peytavie, Adrien
Original Article

AI-ChartParser: A Method For Extracting Experimental Data From Curve Charts in Academic Papers

Yang, Wenjin
He, Jie
Zhang, Xiaotong
Gong, Haiyan
Original Article

Comparative Study of Four Visualization Techniques and Positional Variations for Displaying Exercise Data on Smartwatches

Liu, Yu
Xia, Zhouxuan
Du, Jinyuan
Original Article

3DGM: Deformable and Texturable 3D Gaussian Model via Level-of-Detail Proxy

Wang, Xiangzhi Eric
Sin, Zackary P. T.
Original Article

Hi3DFace: High-Realistic 3D Face Reconstruction From a Single Occluded Image

Huang, Dongjin
Shi, Yongsheng
Qu, Jiantao
Liu, Jinhua
Tang, Wen
Original Article

Real-Time Neural Denoising for Volume Rendering Using Dual-Input Feature Fusion Network

Xu, Chunxiao
Xu, Xinran
Zhang, Jiatian
Liu, Yufei
Cao, Yiheng
Zhao, Lingxiao
Major Revision from Eurographics Conference

EyeExpand: A Low-Burden and Accurate 3D Object Selection Method With Gaze and Raycasting

Xu, X.
He, Y.
Ge, Y.
Zheng, Z.
Major Revision from Eurographics Conference

Theoretical Model Validation of the Multisensory Role on Subjective Realism, Presence and Involvement in Immersive Virtual Reality

Gonçalves, Guilherme
Peixoto, Bruno
Melo, Miguel
Bessa, Maximino
Major Revision from Eurographics Conference

Real-Time and Controllable Reactive Motion Synthesis via Intention Guidance

Zhang, Xiaotang
Chang, Ziyi
Men, Qianhui
Shum, Hubert P. H.
Major Revision from Eurographics Conference

Herds From Video: Learning a Microscopic Herd Model From Macroscopic Motion Data

Gong, Xianjin
Gain, James
Rohmer, Damien
Lyonnet, Sixtine
Pettré, Julien
Cani, Marie-Paule
Major Revision from EuroVis Symposium

GeoDEN: A Visual Exploration Tool for Analyzing the Geographic Spread of Dengue Serotypes

Marler, Aidan
Roell, Yannik
Knoblauch, Steffen
Messina, Jane P.
Jaenisch, Thomas
Karimzadeh, Mohammad
Major Revision from Pacific Graphics

Self-Supervised Image Harmonization via Region-Aware Harmony Classification

Tian, Chenyang
Wang, Xinbo
Zhang, Qing
Correction

Correction to 'Antarstick: Extracting Snow Height From Time-Lapse Photography'

Lang, M.
Mráz, R.
Trtík, M.
Stoppel, S.
Byška, J.
Kozlíková, B.


BibTeX (44-Issue 6)
                
@article{10.1111:cgf.15123,
  journal = {Computer Graphics Forum},
  title = {{Issue Information}},
  author = {},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15123}
}

@article{10.1111:cgf.70091,
  journal = {Computer Graphics Forum},
  title = {{Augury and Forerunner: Real-Time Feedback Via Predictive Numerical Optimization and Input Prediction}},
  author = {Graus, J. and Gingold, Y.},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70091}
}

@article{10.1111:cgf.70005,
  journal = {Computer Graphics Forum},
  title = {{Dynamic Cutting Simulation Using Elastic Snapping for Mesh Quality Optimization}},
  author = {Zeng, Z. and Courtecuisse, H.},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70005}
}

@article{10.1111:cgf.70088,
  journal = {Computer Graphics Forum},
  title = {{NCD: Normal-Guided Chamfer Distance Loss for Watertight Mesh Reconstruction from Unoriented Point Clouds}},
  author = {Li, Jiaxin and Tan, Jiawei and Ou, Zhilong and Wang, Hongxing},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70088}
}

@article{10.1111:cgf.70097,
  journal = {Computer Graphics Forum},
  title = {{Gaze-Aware Visualisation: Design Considerations and Research Agenda}},
  author = {Jianu, Radu and Silva, Nelson and Rodrigues, Nils and Blascheck, Tanja and Schreck, Tobias and Weiskopf, Daniel},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70097}
}

@article{10.1111:cgf.70093,
  journal = {Computer Graphics Forum},
  title = {{LEAD: Latent Realignment for Human Motion Diffusion}},
  author = {Andreou, Nefeli and Wang, Xi and Fernández Abrevaya, Victoria and Cani, Marie-Paule and Chrysanthou, Yiorgos and Kalogeiton, Vicky},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70093}
}

@article{10.1111:cgf.70148,
  journal = {Computer Graphics Forum},
  title = {{Self-Calibrating Fisheye Lens Aberrations for Novel View Synthesis}},
  author = {Xiang, Jinhui and Li, Yuqi and Li, Jiabao and Zheng, Wenxing and Fu, Qiang},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70148}
}

@article{10.1111:cgf.70149,
  journal = {Computer Graphics Forum},
  title = {{Adaptive and Iterative Point Cloud Denoising with Score-Based Diffusion Model}},
  author = {Wang, Zhaonan and Li, Manyi and Xin, Shiqing and Tu, Changhe},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70149}
}

@article{10.1111:cgf.70150,
  journal = {Computer Graphics Forum},
  title = {{MARV: Multiview Augmented Reality Visualisation for Exploring Rich Material Data}},
  author = {Gall, Alexander and Heim, Anja and Gröller, Eduard and Heinzl, Christoph},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70150}
}

@article{10.1111:cgf.70151,
  journal = {Computer Graphics Forum},
  title = {{Optimal Dimensionality Selection Using Hull Heatmaps for Single-Cell Analysis}},
  author = {Jeong, Haejin and Jeong, Hyoung-oh and Lee, Semin and Jeong, Won-Ki},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70151}
}

@article{10.1111:cgf.70156,
  journal = {Computer Graphics Forum},
  title = {{MPACT: Mesoscopic Profiling and Abstraction of Crowd Trajectories}},
  author = {Lemonari, Marilena and Panayiotou, Andreas and Kyriakou, Theodoros and Pelechano, Nuria and Chrysanthou, Yiorgos and Aristidou, Andreas and Charalambous, Panayiotis},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70156}
}

@article{10.1111:cgf.70158,
  journal = {Computer Graphics Forum},
  title = {{Exploratory Analysis of Scientific Publications for University Governance}},
  author = {Gràcia, A. and Padró, L. and Alarcon, E. and Vázquez, P.},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70158}
}

@article{10.1111:cgf.70160,
  journal = {Computer Graphics Forum},
  title = {{Vector-Based Terrain Modelling}},
  author = {Perche, Simon and Guérin, Eric and Galin, Eric and Peytavie, Adrien},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70160}
}

@article{10.1111:cgf.70146,
  journal = {Computer Graphics Forum},
  title = {{AI-ChartParser: A Method For Extracting Experimental Data From Curve Charts in Academic Papers}},
  author = {Yang, Wenjin and He, Jie and Zhang, Xiaotong and Gong, Haiyan},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70146}
}

@article{10.1111:cgf.70224,
  journal = {Computer Graphics Forum},
  title = {{Comparative Study of Four Visualization Techniques and Positional Variations for Displaying Exercise Data on Smartwatches}},
  author = {Liu, Yu and Xia, Zhouxuan and Du, Jinyuan},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70224}
}

@article{10.1111:cgf.70223,
  journal = {Computer Graphics Forum},
  title = {{3DGM: Deformable and Texturable 3D Gaussian Model via Level-of-Detail Proxy}},
  author = {Wang, Xiangzhi Eric and Sin, Zackary P. T.},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70223}
}

@article{10.1111:cgf.70277,
  journal = {Computer Graphics Forum},
  title = {{Hi3DFace: High-Realistic 3D Face Reconstruction From a Single Occluded Image}},
  author = {Huang, Dongjin and Shi, Yongsheng and Qu, Jiantao and Liu, Jinhua and Tang, Wen},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70277}
}

@article{10.1111:cgf.70276,
  journal = {Computer Graphics Forum},
  title = {{Real-Time Neural Denoising for Volume Rendering Using Dual-Input Feature Fusion Network}},
  author = {Xu, Chunxiao and Xu, Xinran and Zhang, Jiatian and Liu, Yufei and Cao, Yiheng and Zhao, Lingxiao},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70276}
}

@article{10.1111:cgf.70144,
  journal = {Computer Graphics Forum},
  title = {{EyeExpand: A Low-Burden and Accurate 3D Object Selection Method With Gaze and Raycasting}},
  author = {Xu, X. and He, Y. and Ge, Y. and Zheng, Z.},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70144}
}

@article{10.1111:cgf.70145,
  journal = {Computer Graphics Forum},
  title = {{Theoretical Model Validation of the Multisensory Role on Subjective Realism, Presence and Involvement in Immersive Virtual Reality}},
  author = {Gonçalves, Guilherme and Peixoto, Bruno and Melo, Miguel and Bessa, Maximino},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70145}
}

@article{10.1111:cgf.70222,
  journal = {Computer Graphics Forum},
  title = {{Real-Time and Controllable Reactive Motion Synthesis via Intention Guidance}},
  author = {Zhang, Xiaotang and Chang, Ziyi and Men, Qianhui and Shum, Hubert P. H.},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70222}
}

@article{10.1111:cgf.70225,
  journal = {Computer Graphics Forum},
  title = {{Herds From Video: Learning a Microscopic Herd Model From Macroscopic Motion Data}},
  author = {Gong, Xianjin and Gain, James and Rohmer, Damien and Lyonnet, Sixtine and Pettré, Julien and Cani, Marie-Paule},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70225}
}

@article{10.1111:cgf.70087,
  journal = {Computer Graphics Forum},
  title = {{GeoDEN: A Visual Exploration Tool for Analyzing the Geographic Spread of Dengue Serotypes}},
  author = {Marler, Aidan and Roell, Yannik and Knoblauch, Steffen and Messina, Jane P. and Jaenisch, Thomas and Karimzadeh, Mohammad},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70087}
}

@article{10.1111:cgf.70157,
  journal = {Computer Graphics Forum},
  title = {{Self-Supervised Image Harmonization via Region-Aware Harmony Classification}},
  author = {Tian, Chenyang and Wang, Xinbo and Zhang, Qing},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70157}
}

@article{10.1111:cgf.70006,
  journal = {Computer Graphics Forum},
  title = {{Correction to 'Antarstick: Extracting Snow Height From Time-Lapse Photography'}},
  author = {Lang, M. and Mráz, R. and Trtík, M. and Stoppel, S. and Byška, J. and Kozlíková, B.},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70006}
}

Recent Submissions

  • Item
    Issue Information
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    Information page for issue 44(6) of Computer Graphics Forum, published in September 2025.
  • Item
    Augury and Forerunner: Real-Time Feedback Via Predictive Numerical Optimization and Input Prediction
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Graus, J.; Gingold, Y.; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    In many interactive systems, user input initializes and launches an iterative optimization procedure. The goal is to provide assistive feedback to some creation/editing process. Examples include constraint-based GUI layout and complex snapping scenarios. Many geometric problems, such as fitting a shape to data, involve optimizations which may take seconds to complete (or even longer), yet require human guidance. In order to make these optimization routines practical in interactive sessions, simplifications or sacrifices must be made. Canonically, non-convex optimization problems are solved iteratively by taking a series of steps towards a solution. By their nature, there are many locally optimal solutions; which solution is found is highly dependent on an initial guess. There is a fundamental conflict between optimization and interactivity. Interrupting and restarting the optimization every time the user, e.g. moves the mouse prevents any solution from being computed until the user ceases interaction. Continuing to run the optimization procedure computes a perpetually outdated solution. This presents a particular unsolved challenge with respect to direct manipulation. Every time the user, e.g. moves the mouse, the entire optimization must be re-started with the new user input, since returning a stale result associated with the previous user state is undesirable. We propose predictive short-circuiting to reduce this fundamental tension. Our approach memoizes paths in the optimization's configuration space and predicts the trajectory of future optimization in real time, leveraging common C1 continuity assumptions. This enables direct manipulation of formerly sluggish interactions. We demonstrate our approach on geometric fitting tasks. Additionally, we evaluate complementary mouse motion prediction algorithms as a means to discard or skip optimization problems that are irrelevant to the user's intended initial configuration for a targeted optimization procedure. Predicting where the mouse cursor will be located at the end of an operation, such as dragging a model of an engine component into scanned point cloud data to perform geometric alignment, allows us to pre-emptively begin solving the targeted problem before the user finishes their movement. We take advantage of the fact that the prediction indicates the approximate energy basin the optimization procedure will need to explore.
  • Item
    Dynamic Cutting Simulation Using Elastic Snapping for Mesh Quality Optimization
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Zeng, Z.; Courtecuisse, H.; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    In this manuscript, we present a novel cutting method that involves using a vertex-snapping strategy to fit the boundary surface onto the cutting path while avoiding generating new elements. We employ a point cloud with polynomial fitting to generate the cutting path, allowing for operation with unscheduled cuts and potential perturbations. Efficient geometry operations are developed to handle topological changes during progressive cutting. While it is challenging to optimize the mesh quality and accurately align the cut surface with the cutting path, we propose an innovative strategy that converts this geometric problem into a quasi-static elastic problem. This involves solving a constrained elastic problem within an auxiliary simulation, where the system optimizes the mesh quality when reaching equilibrium. Furthermore, we propose modifications to a GPU-based matrix-free solver, enabling efficient updates of the precomputed data stored in the GPU memory and thus ensuring real-time performance.
  • Item
    NCD: Normal-Guided Chamfer Distance Loss for Watertight Mesh Reconstruction from Unoriented Point Clouds
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Li, Jiaxin; Tan, Jiawei; Ou, Zhilong; Wang, Hongxing; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    As a widely used loss function in learnable watertight mesh reconstruction from unoriented point clouds, Chamfer Distance (CD) efficiently quantifies the alignment between the sampled point cloud from the reconstructed mesh and its corresponding input point cloud. Occasionally, to enhance reconstruction fidelity, CD incorporates a normal consistency term, albeit at the cost of efficiency. In this context, normal estimation for unoriented point clouds requires computationally intensive matrix decomposition or specialized pre-trained models, whereas deriving normals for mesh-sampled points can be readily achieved using the cross product of mesh vertices. However, the reconstruction models employing CD and its variants typically rely solely on the spatial coordinates of the points, which omits normal information in favor of efficiency and deployability. To tackle this challenge, we propose a novel loss function for watertight mesh reconstruction from unoriented point clouds, termed Normal-guided Chamfer Distance (NCD). Building upon CD, NCD introduces a normal-steered weighting mechanism based on the angle between the normal at each mesh-sampled point and the vector to its corresponding input point, offering several advantages: (i) it leverages readily available mesh-sampled point normals to weight coordinate-based Euclidean distances, thus extending the capability of CD; (ii) it eliminates the need for normal estimation from input unoriented point clouds; (iii) it incurs a negligible increase in computational complexity compared to CD. We employ NCD as the training loss for point-to-mesh reconstruction with multiple models and initial watertight meshes on benchmark datasets, demonstrating its superiority over state-of-the-art CD variants.
  • Item
    Gaze-Aware Visualisation: Design Considerations and Research Agenda
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Jianu, Radu; Silva, Nelson; Rodrigues, Nils; Blascheck, Tanja; Schreck, Tobias; Weiskopf, Daniel; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    Eye tracking provides a unique perspective on the inherently visual discourse between visualisation systems and their users, and has recently become sufficiently precise and affordable to be integrated as regular input into workstations and virtual or augmented reality headsets alike. As such, real-time eye tracking can now contribute significantly towards the development of gaze-aware visualisations that infer and monitor users' needs to actively support their activities. To facilitate such systems we make three contributions. First, we structure and discuss design considerations for gaze-aware visualisations along four axes: measurable data; inferable data; opportunities for support; and limiting factors to beware. Second, we distill visualisation research challenges that preclude such systems. Finally, we show via three usage scenarios how to apply these design considerations to imagine how existing systems can benefit from real-time eye tracking. We combined a structured literature analysis, a consideration of suitable places for eye-tracking integration in the typical visualisation ecosystem, and design space modelling. Eye tracking has significant potential to improve the interactive visual analysis of data across many visualisation domains. Our paper attempts to provide a comprehensive, general survey and conceptual discussion in this promising field, outlining the state-of-the-art and future research opportunities.
  • Item
    LEAD: Latent Realignment for Human Motion Diffusion
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Andreou, Nefeli; Wang, Xi; Fernández Abrevaya, Victoria; Cani, Marie-Paule; Chrysanthou, Yiorgos; Kalogeiton, Vicky; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    Our goal is to generate realistic human motion from natural language. Modern methods often face a trade-off between model expressiveness and text-to-motion (T2M) alignment. Some align text and motion latent spaces but sacrifice expressiveness; others rely on diffusion models producing impressive motions but lacking semantic meaning in their latent space. This may compromise realism, diversity and applicability. Here, we address this by combining latent diffusion with a realignment mechanism, producing a novel, semantically structured space that encodes the semantics of language. Leveraging this capability, we introduce the task of textual motion inversion to capture novel motion concepts from a few examples. For motion synthesis, we evaluate LEAD on HumanML3D and KIT-ML and show comparable performance to the state-of-the-art in terms of realism, diversity and text-motion consistency. Our qualitative analysis and user study reveal that our synthesised motions are sharper, more human-like and comply better with the text compared to modern methods. For motion textual inversion (MTI), our method demonstrates improvements in capturing out-of-distribution characteristics in comparison to traditional VAEs.
  • Item
    Self-Calibrating Fisheye Lens Aberrations for Novel View Synthesis
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Xiang, Jinhui; Li, Yuqi; Li, Jiabao; Zheng, Wenxing; Fu, Qiang; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    Neural rendering techniques, such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3D-GS), have led to significant advancements in scene reconstruction and novel view synthesis (NVS). These methods assume the use of an ideal pinhole model, which is free from lens distortion and optical aberrations. However, fisheye lenses introduce unavoidable aberrations due to their wide-angle design and complex manufacturing, leading to multi-view inconsistencies that compromise scene reconstruction quality. In this paper, we propose an end-to-end framework that integrates a standard 3D reconstruction pipeline with our lens aberration model to simultaneously calibrate lens aberrations and reconstruct 3D scenes. By modelling the real imaging process and jointly optimising both tasks, our framework eliminates the impact of aberration-induced inconsistencies on reconstruction. Additionally, we propose a curriculum learning approach that ensures stable optimisation and high-quality reconstruction results, even in the presence of multiple aberrations. To address the limitations of existing benchmarks, we introduce AbeRec, a dataset composed of scenes captured with lenses exhibiting severe aberrations. Extensive experiments on both existing public datasets and our proposed dataset demonstrate that our method not only significantly outperforms previous state-of-the-art methods on fisheye lenses with severe aberrations but also generalises well to scenes captured by non-fisheye lenses. Code and datasets are available at https://github.com/CPREgroup/Calibrating-Fisheye-Lens-Aberration-for-NVS.
  • Item
    Adaptive and Iterative Point Cloud Denoising with Score-Based Diffusion Model
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Wang, Zhaonan; Li, Manyi; Xin, Shiqing; Tu, Changhe; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    The point cloud denoising task aims to recover the clean point cloud from the scanned data coupled with different levels or patterns of noise. The recent state-of-the-art methods often train deep neural networks to update the point locations towards the clean point cloud, and empirically repeat the denoising process several times in order to obtain the denoised results. It is not clear how to efficiently arrange the iterative denoising processes to deal with different levels or patterns of noise. In this paper, we propose an adaptive and iterative point cloud denoising method based on the score-based diffusion model. For a given noisy point cloud, we first estimate the noise variation and determine an adaptive denoising schedule with appropriate step sizes, then invoke the trained network iteratively to update point clouds following the adaptive schedule. To facilitate this adaptive and iterative denoising process, we design the network architecture and a two-stage sampling strategy for the network training to enable feature fusion and gradient fusion for iterative denoising. Compared to the state-of-the-art point cloud denoising methods, our approach obtains clean and smooth denoised point clouds, while preserving the shape boundary and details better. Our results not only outperform the other methods both qualitatively and quantitatively, but also are preferable on the synthetic dataset with different patterns of noise, as well as the real-scanned dataset.
  • Item
    MARV: Multiview Augmented Reality Visualisation for Exploring Rich Material Data
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Gall, Alexander; Heim, Anja; Gröller, Eduard; Heinzl, Christoph; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    Rich material data is complex, large and heterogeneous, integrating primary and secondary non-destructive testing data for spatial, spatio-temporal, as well as high-dimensional data analyses. Currently, materials experts mainly rely on conventional desktop-based systems using 2D visualisation techniques, which render respective analyses a time-consuming and mentally demanding challenge. MARV is a novel immersive visual analytics system, which makes analyses of such data more effective and engaging in an augmented reality setting. For this purpose, MARV includes three newly designed visualisation techniques: MDD Glyphs with a Skewness Kurtosis Mapper, Temporal Evolution Tracker, and Chrono Bins, facilitating interactive exploration and comparison of multidimensional distributions of attribute data from multiple time steps. A qualitative evaluation conducted with materials experts in a real-world case study demonstrates the benefits of the proposed visualisation techniques. This evaluation revealed that combining spatial and abstract data in an immersive environment improves their analytical capabilities and facilitates the identification of patterns, anomalies, as well as changes over time.
  • Item
    Optimal Dimensionality Selection Using Hull Heatmaps for Single-Cell Analysis
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Jeong, Haejin; Jeong, Hyoung-oh; Lee, Semin; Jeong, Won-Ki; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    Single-cell RNA sequencing (scRNA-seq) has gained prominence as a valuable technique for examining cellular gene expression patterns at the individual cell level. In the analysis of scRNA-seq datasets, it is common practice to visualise a subset of principal components (PCs), obtained via principal component analysis (PCA), using dimensionality reduction techniques such as t-stochastic neighbour embedding (t-SNE). Determining the number of PCs (i.e. dimensionality) is a critical step that influences the outcome of single-cell analysis, and this process typically requires a labour-intensive manual assessment involving the inspection of numerous projection plots. To address this challenge, we present a visualisation system that assists analysts in efficiently determining the optimal dimensionality of scRNA-seq data. The proposed system employs two hull heatmaps, a cell type heatmap and a cluster heatmap, which offer comprehensive representations of target cells of multiple cell types across various dimensionalities through the utilisation of a convex hull-embedded colour map. The cell type heatmap shows overlaps between cell types, and the cluster heatmap compares cell clustering results. The proposed hull heatmaps effectively alleviate the laborious task of manually evaluating hundreds of projection plots for searching for the optimal dimensionality. Additionally, our system offers interactive visualisation of gene expression levels and an intuitive lasso selection tool, thereby enabling analysts to progressively refine the convex hulls on the hull heatmaps. We validated the usefulness of the proposed system through two quantitative evaluations and three case studies.
  • Item
    MPACT: Mesoscopic Profiling and Abstraction of Crowd Trajectories
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Lemonari, Marilena; Panayiotou, Andreas; Kyriakou, Theodoros; Pelechano, Nuria; Chrysanthou, Yiorgos; Aristidou, Andreas; Charalambous, Panayiotis; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    Simulating believable crowds for applications like movies or games is challenging due to the many components that comprise a realistic outcome. Users typically need to manually tune a large number of simulation parameters until they reach the desired results. We introduce MPACT, a framework that leverages image-based encoding to convert unlabelled crowd data into meaningful and controllable parameters for crowd generation. In essence, we train a parameter prediction network on a diverse set of synthetic data, which includes pairs of images and corresponding crowd profiles. The learned parameter space enables: (a) implicit crowd authoring and control, allowing users to define desired crowd scenarios using real-world trajectory data, and (b) crowd analysis, facilitating the identification of crowd behaviours in the input and the classification of unseen scenarios through operations within the latent space. We quantitatively and qualitatively evaluate our framework, comparing it against real-world data and selected baselines, while also conducting user studies with expert and novice users. Our experiments show that the generated crowds score high in terms of simulation believability, plausibility and crowd behaviour faithfulness.
  • Item
    Exploratory Analysis of Scientific Publications for University Governance
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Gràcia, A.; Padró, L.; Alarcon, E.; Vázquez, P.; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    Research-oriented universities often comprise numerous researchers of various types and possess complex research structures that encompass research groups, departments, laboratories, and research institutes. In this situation, understanding the university's strengths and areas of excellence requires careful examination. Additionally, individuals at different levels of governance (e.g., department heads, directors of research institutes, rectors) may seek to establish synergies among researchers to tackle issues such as international project applications or industry technology transfer. University officials and faculty members frequently require the expertise of specific research groups or individuals, but struggle to obtain this information beyond their personal networks. This limits their ability to locate necessary resources effectively. Fortunately, most institutions have databases containing publications that could provide valuable insights into areas of strength within the university. In this article, we present a visual analysis application capable of addressing these questions and assisting management in making informed decisions regarding governance measures such as creating new research institutes. Our system has been evaluated by domain experts, who found it highly beneficial and expressed interest in utilising it regularly.
  • Item
    Vector-Based Terrain Modelling
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Perche, Simon; Guérin, Eric; Galin, Eric; Peytavie, Adrien; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    Vector-based graphics offer numerous advantages over grid-based models, including resolution independence and ease of manipulation. Despite these benefits, their use in landscape modelling remains uncommon because of a lack of direct editing and interactive feedback, essential for matching the artist's vision. We introduce a new vector-based model for creating digital terrains based on computationally efficient primitives. We propose a method to convert grid-based digital elevation maps to this representation with a user-defined level of accuracy. Once vectorized, the terrain can be authored using interactive high-level skeleton-based tools adapted to the primitive representation, allowing local deformations that automatically adapt to underlying geomorphological structures and landforms of the terrain.
  • Item
    AI-ChartParser: A Method For Extracting Experimental Data From Curve Charts in Academic Papers
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Yang, Wenjin; He, Jie; Zhang, Xiaotong; Gong, Haiyan; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    In the fields of engineering and natural sciences, curve charts serve as indispensable visualization tools for scientific research, product development and engineering design, as they encapsulate crucial data necessary for comprehensive analysis. Existing methodologies for data extraction from line charts predominantly depend on single-task models, which frequently exhibit limitations in efficiency and generalization. To overcome these challenges, we propose AI-ChartParser, an end-to-end deep learning model that employs multi-task learning to concurrently execute chart element detection, pivot point detection and curve detection. This approach effectively and efficiently parses diverse chart formats within a cohesive framework. Furthermore, we introduce an Interval-Mean Space-Numerical Mapping algorithm designed to address challenges in data range extraction, thereby significantly minimizing conversion errors. We have incorporated all the methodologies discussed in this paper to develop a comprehensive data extraction tool, facilitating the automatic conversion of line charts into tabular data. Our model exhibits exceptional performance on complex real-world datasets, achieving state-of-the-art accuracy and speed across all three tasks. To facilitate further research, the source codes and pre-trained models are released at https://github.com/ywking/ChartParser.git.
  • Item
    Comparative Study of Four Visualization Techniques and Positional Variations for Displaying Exercise Data on Smartwatches
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Liu, Yu; Xia, Zhouxuan; Du, Jinyuan; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    As smartwatches become increasingly prevalent, their built-in sensors provide a rich source for gathering various personal data, including physical activity and health metrics. We found that different brands and models use various visualization techniques. However, the effectiveness of these visualizations within the limited display space of smartwatches remains unclear. Therefore, this paper compares four popular visualizations—bar charts, radial bar charts, donut charts and multi-donut charts—used for displaying activity data on smartwatches. The evaluation focuses on their performance in three common user tasks: counting completed goals, estimating completion percentage and estimating exercise duration. Additionally, the study investigates the impact of the positioning of the target data item within these visualizations on user performance. Our results indicate that bar charts are superior in terms of task completion time across all tasks. Radial bar charts and multi-donut charts are most effective in helping users perceive the completion ratio (percentage) of each activity and understand the time taken for each activity metric (in minutes). Interestingly, we found that the positioning of data items within the visualizations significantly influences user performance in many cases. Furthermore, it was noted that the visualizations users favoured the most were generally those that enabled them to achieve the highest accuracy in task completion. These insights provide valuable guidelines for future designs in visualizing exercise data on smartwatches. Supplementary material is available at https://osf.io/5u2ph/.
  • Item
    3DGM: Deformable and Texturable 3D Gaussian Model via Level-of-Detail Proxy
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Wang, Xiangzhi Eric; Sin, Zackary P. T.; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    3D Gaussian Splatting has markedly impacted neural rendering by achieving impressive fidelity and performance. Despite this achievement, it is not readily applicable to developing interactive applications. Real-time applications like XR apps and games require functions such as animation, UV mapping and level of detail (LOD) simultaneously manipulated through a 3D model. To address this need, we propose a modelling strategy analogous to typical 3D models, which we call 3D Gaussian Model (3DGM). 3DGM relies on attaching 3D Gaussians to the triangles of a mesh proxy, and the key idea is to bind sheared 3D Gaussians in texture space and re-project them back to world space through implicit shell mapping; this design naturally enables deformation and UV mapping via the proxy. Further, to optimize speed and fidelity based on different viewing distances, each triangle can be tessellated to change the number of involved 3D Gaussians adaptively. Application-wise, we will show that our proxy-based 3DGM is capable of enabling novel deformation without animated training data, texture transferring via UV mapping of the 3D Gaussians, and LOD rendering. The results indicate that our model achieves better fidelity for deformation and better optimization of fidelity and performance given different viewing distances. Further, we believe the results indicate the potential of our work for enabling interactive applications for 3D Gaussian Splatting.
  • Item
    Hi3DFace: High-Realistic 3D Face Reconstruction From a Single Occluded Image
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Huang, Dongjin; Shi, Yongsheng; Qu, Jiantao; Liu, Jinhua; Tang, Wen; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    We propose Hi3DFace, a novel framework for simultaneous de-occlusion and high-fidelity 3D face reconstruction. To address real-world occlusions, we construct a diverse facial dataset by simulating common obstructions and present TMANet, a transformer-based multi-scale attention network that effectively removes occlusions and restores clean face images. For the 3D face reconstruction stage, we propose a coarse-medium-fine self-supervised scheme. In the coarse reconstruction pipeline, we adopt a face regression network to predict 3DMM coefficients for generating a smooth 3D face. In the medium-scale reconstruction pipeline, we propose a novel depth displacement network, DDFTNet, to remove noise and restore rich details to the smooth 3D geometry. In the fine-scale reconstruction pipeline, we design a GCN (graph convolutional network) refiner to enhance the fidelity of 3D textures. Additionally, a light-aware network (LightNet) is proposed to distil lighting parameters, ensuring illumination consistency between reconstructed 3D faces and input images. Extensive experimental results demonstrate that the proposed Hi3DFace significantly outperforms state-of-the-art reconstruction methods on four public datasets, and five constructed occlusion-type datasets. Hi3DFace achieves robustness and effectiveness in removing occlusions and reconstructing 3D faces from real-world occluded facial images.
  • Item
    Real-Time Neural Denoising for Volume Rendering Using Dual-Input Feature Fusion Network
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Xu, Chunxiao; Xu, Xinran; Zhang, Jiatian; Liu, Yufei; Cao, Yiheng; Zhao, Lingxiao; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    Direct volume rendering (DVR) is a widely used technique in the visualization of volumetric data. As an important DVR technique, volumetric path tracing (VPT) simulates light transport to produce realistic rendering results, which provides enhanced perception and understanding for users, especially in the field of medical imaging. VPT, based on the Monte Carlo (MC) method, typically requires a large number of samples to generate noise-free results. However, in real-time applications, only a limited number of samples per pixel is allowed and significant noise can be created. This paper introduces a novel neural denoising approach that utilizes a new feature fusion method for VPT. Our method uses a feature decomposition technique that separates radiance into components according to noise levels. Our new decomposition technique mitigates biases found in contemporary decoupling denoising algorithms and shows better utilization of samples. A lightweight dual-input network is designed to correlate these components with noise-free ground truth. Additionally, for denoising sequences of video frames, we develop a learning-based temporal method that calculates temporal weight maps, blending reprojected results of previous frames with spatially denoised current frames. Comparative results demonstrate that our network performs faster inference than existing methods and can produce denoised output of higher quality in real time.
  • Item
    EyeExpand: A Low-Burden and Accurate 3D Object Selection Method With Gaze and Raycasting
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Xu, X.; He, Y.; Ge, Y.; Zheng, Z.; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    Raycasting is a widely used object selection technique in virtual reality. However, in dense scenes, it becomes difficult for users to accurately select targets when objects are partially or fully occluded. While recent studies have introduced progressive refinement techniques based on raycasting to address these limitations, they still suffer from challenges such as high interaction complexity and difficulties in preserving the relative spatial relationships between objects within the scene. In this paper, we present a simple and efficient 3D progressive refinement technique for object selection in dense scenes while maintaining the relative spatial positions of selected objects. We compare our technique with other progressive refinement techniques and evaluate their performance and user experience in a target selection task within dense VR environments. The results show that in low- and medium-density scenarios, our technique outperforms existing progressive refinement techniques in terms of selection time. In high-density scenarios, the proposed technique significantly reduces physical effort while maintaining comparable selection times, thereby offering an improved overall interactive experience.
  • Item
    Theoretical Model Validation of the Multisensory Role on Subjective Realism, Presence and Involvement in Immersive Virtual Reality
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Gonçalves, Guilherme; Peixoto, Bruno; Melo, Miguel; Bessa, Maximino; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    With the consistent adoption of iVR and growing research on the topic, it becomes fundamental to understand how the perception of Realism plays a role in the potential of iVR. This work puts forward a hypothesis-driven theoretical model of how the perception of each multisensory stimulus (Visual, Audio, Haptic and Scent) is related to the perception of Realism of the whole experience (Subjective Realism) and, in turn, how this Subjective Realism is related to Involvement and Presence. The model was validated using a sample of 216 subjects in a multisensory iVR experience. The results indicated a good model fit and provided evidence on how the perception of Realism of Visual, Audio and Scent individually is linked to Subjective Realism. Furthermore, the results demonstrate strong evidence that Subjective Realism is strongly associated with Involvement and Presence. These results put forward a validated questionnaire for the perception of Realism of different aspects of the virtual experience and a robust theoretical model on the interconnections of these constructs. We provide empirical evidence that can be used to optimise iVR systems for Presence, Involvement and Subjective Realism, thereby enhancing the effectiveness of iVR experiences and opening new research avenues.
  • Item
    Real-Time and Controllable Reactive Motion Synthesis via Intention Guidance
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Zhang, Xiaotang; Chang, Ziyi; Men, Qianhui; Shum, Hubert P. H.; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    We propose a real-time method for reactive motion synthesis based on the known trajectory of an input character, predicting instant reactions using only historical, user-controlled motions. Our method handles the uncertainty of future movements by introducing an intention predictor, which forecasts key joint intentions to make pose prediction more deterministic from the historical interaction. The intention is later encoded into the latent space of its reactive motion, matched with a codebook that represents mappings between input and output. It samples from the categorical distribution for pose generation and strengthens model robustness through adversarial training. Unlike previous offline approaches, the system can recursively generate intentions and reactive motions using feedback from earlier steps, enabling real-time, long-term realistic interactive synthesis. Both quantitative and qualitative experiments show our approach outperforms other matching-based motion synthesis approaches, delivering superior stability and generalisability. In our method, the user can also actively influence the outcome by controlling the moving directions, creating a personalised interaction path that deviates from predefined trajectories.
  • Item
    Herds From Video: Learning a Microscopic Herd Model From Macroscopic Motion Data
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Gong, Xianjin; Gain, James; Rohmer, Damien; Lyonnet, Sixtine; Pettré, Julien; Cani, Marie-Paule; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    We present a method for animating herds that automatically tunes a microscopic herd model based on a short video clip of real animals. Our method handles videos with dense herds, where individual animal motion cannot be separated out. Our contribution is a novel framework for extracting macroscopic herd behaviour from such video clips, and then deriving the microscopic agent parameters that best match this behaviour. To support this learning process, we extend standard agent models to provide a separation between leaders and followers, better match the occlusion and field-of-view limitations of real animals, support differentiable parameter optimization and improve authoring control. We validate the method by showing that once optimized, the social force and perception parameters of the resulting herd model are accurate enough to predict subsequent frames in the video, even for macroscopic properties not directly incorporated in the optimization process. Furthermore, the extracted herding characteristics can be applied to any terrain with a palette and region-painting approach that generalizes to different herd sizes and leader trajectories. This enables the authoring of herd animations in new environments while preserving learned behaviour.
  • Item
    GeoDEN: A Visual Exploration Tool for Analyzing the Geographic Spread of Dengue Serotypes
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Marler, Aidan; Roell, Yannik; Knoblauch, Steffen; Messina, Jane P.; Jaenisch, Thomas; Karimzadeh, Mohammad; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    Static maps and animations remain popular in spatial epidemiology of dengue, limiting the analytical depth and scope of visualizations. Over half of the global population live in dengue endemic regions. Understanding the spatiotemporal dynamics of the four closely related dengue serotypes, and their immunological interactions, remains a challenge at a global scale. To facilitate this understanding, we worked with dengue epidemiologists in a user-centred design framework to create GeoDEN, an exploratory visualization tool that empowers experts to investigate spatiotemporal patterns in dengue serotype reports. The tool has several linked visualizations and filtering mechanisms, enabling analysis at a range of spatial and temporal scales. To identify successes and failures, we present both insight-based and value-driven evaluations. Our domain experts found GeoDEN valuable, verifying existing hypotheses and uncovering novel insights that warrant further investigation by the epidemiology community. The developed visual exploration approach can be adapted for exploring other epidemiology and disease incident datasets.
  • Item
    Self-Supervised Image Harmonization via Region-Aware Harmony Classification
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Tian, Chenyang; Wang, Xinbo; Zhang, Qing; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    Image harmonization is a widely used technique in image composition, which aims to adjust the appearance of the composited foreground object according to the style of the background image so that the resulting composited image is visually natural and appears to be photographed. Previous methods are mostly trained in a fully supervised manner; while demonstrating promising results, they do not generalize well to complex unseen cases involving significant style and semantic difference between the composited foreground object and the background image. In this paper, we present a self-supervised image harmonization framework that enables superior performance on complex cases. To do so, we first synthesize a large amount of data with wide diversity for training. We then develop an attentive harmonization module to adaptively adjust the foreground appearance by querying relevant background features. To allow more effective image harmonization, we develop a region-aware harmony classifier to explicitly judge whether an image is harmonious or not. Experiments on several datasets show that our method performs favourably against previous methods. Our code will be made publicly available.
  • Item
    Correction to 'Antarstick: Extracting Snow Height From Time-Lapse Photography'
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Lang, M.; Mráz, R.; Trtík, M.; Stoppel, S.; Byška, J.; Kozlíková, B.; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
    Correction note to the article "Antarstick: Extracting Snow Height From Time-Lapse Photography".