44-Issue 2

Face-First for Digital Avatars
Neural Face Skinning for Mesh-agnostic Facial Expression Cloning
Sihun Cha, Serin Yoon, Kwanggyoon Seo, and Junyong Noh
NePHIM: A Neural Physics-Based Head-Hand Interaction Model
Nicolas Wagner, Ulrich Schwanecke, and Mario Botsch
"Wild West" of Evaluating Speech-Driven 3D Facial Animation Synthesis: A Benchmark Study
Kazi Injamamul Haque, Alkiviadis Pavlou, and Zerrin Yumak
Drawn to Detail: Sketch-Based Modeling and Non-Photorealistic Rendering
VRSurf: Surface Creation from Sparse, Unoriented 3D Strokes
Anandhu Sureshkumar, Amal Dev Parakkat, Georges-Pierre Bonneau, Stefanie Hahmann, and Marie-Paule Cani
Image Vectorization via Gradient Reconstruction
Souymodip Chakraborty, Vineet Batra, Ankit Phogat, Vishwas Jain, Jaswant Singh Ranawat, Sumit Dhingra, Kevin Wampler, Matthew Fisher, and Michal Lukáč
Screentone-Preserved Manga Retargeting
Minshan Xie, Menghan Xia, Chengze Li, Xueting Liu, and Tien-Tsin Wong
2D Neural Fields with Learned Discontinuities
Chenxi Liu, Siqi Wang, Matthew Fisher, Deepali Aneja, and Alec Jacobson
Shape It Til You Make It: Programs for 3D Synthesis
Text-Guided Interactive Scene Synthesis with Scene Prior Guidance
Shaoheng Fang, Haitao Yang, Raymond Mooney, and Qixing Huang
FlairGPT: Repurposing LLMs for Interior Designs
Gabrielle Littlefair, Niladri Shekhar Dutt, and Niloy J. Mitra
Approximating Procedural Models of 3D Shapes with Neural Networks
Ishtiaque Hossain, I-Chao Shen, and Oliver van Kaick
Neural Geometry Processing via Spherical Neural Surfaces
Romy Williamson and Niloy J. Mitra
Eclipsing the Ordinary in Visualization
Physically Based Real-Time Rendering of Eclipses
Simon Schneegans, Jonas Gilg, Volker Ahlers, Gabriel Zachmann, and Andreas Gerndt
Fast Sphere Tracing of Procedural Volumetric Noise for Very Large and Detailed Scenes
Mathéo Moinet and Fabrice Neyret
View-Dependent Visibility Optimization for Monte Carlo Volume Visualization
Nathan Lerzer and Carsten Dachsbacher
VortexTransformer: End-to-End Objective Vortex Detection in 2D Unsteady Flow Using Transformers
Xingdi Zhang, Peter Rautek, and Markus Hadwiger
Learning Image Fractals Using Chaotic Differentiable Point Splatting
Adarsh Djeacoumar, Felix Mujkanovic, Hans-Peter Seidel, and Thomas Leimkühler
Splat-tacular Radiance Fields
4-LEGS: 4D Language Embedded Gaussian Splatting
Gal Fiebelman, Tamir Cohen, Ayellet Morgenstern, Peter Hedman, and Hadar Averbuch-Elor
NoPe-NeRF++: Local-to-Global Optimization of NeRF with No Pose Prior
Dongbo Shi, Shen Cao, Bojian Wu, Jinhui Guo, Lubin Fan, Renjie Chen, Ligang Liu, and Jieping Ye
Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency
Florian Hahlbohm, Fabian Friederichs, Tim Weyrich, Linus Franke, Moritz Kappel, Susana Castillo, Marc Stamminger, Martin Eisemann, and Marcus Magnor
Does 3D Gaussian Splatting Need Accurate Volumetric Rendering?
Adam Celarek, Georgios Kopanas, George Drettakis, Michael Wimmer, and Bernhard Kerbl
Learning Fast 3D Gaussian Splatting Rendering using Continuous Level of Detail
Nicholas Milef, Dario Seyb, Todd Keeler, Thu Nguyen-Phuoc, Aljaz Bozic, Sushant Kondguli, and Carl Marshall
Fix it in Post: Image and Video Synthesis and Analysis
Bracket Diffusion: HDR Image Generation by Consistent LDR Denoising
Mojtaba Bemana, Thomas Leimkühler, Karol Myszkowski, Hans-Peter Seidel, and Tobias Ritschel
SAHLUT: Efficient Image Enhancement using Spatial-Aware High-Light Compensation Look-up Tables
Xin Chen, Linge Li, Linhong Mu, Yan Chen, and Jingwei Guan
Infusion: Internal Diffusion for Inpainting of Dynamic Textures and Complex Motion
Nicolas Cherel, Andrés Almansa, Yann Gousseau, and Alasdair Newson
D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video
Moritz Kappel, Florian Hahlbohm, Timon Scholz, Susana Castillo, Christian Theobalt, Martin Eisemann, Vladislav Golyanik, and Marcus Magnor
A Multimodal Personality Prediction Framework based on Adaptive Graph Transformer Network and Multi-task Learning
Rongquan Wang, Xile Zhao, Xianyu Xu, and Yang Hao
The Shape of Rendering
Differentiable Rendering based Part-Aware Occlusion Proxy Generation
Zhipeng Tan, Yongxiang Zhang, Fei Xia, and Fei Ling
Implicit UVs: Real-time Semi-global Parameterization of Implicit Surfaces
Baptiste Genest, Pierre Gueth, Jérémy Levallois, and Stephanie Wang
Lipschitz Pruning: Hierarchical Simplification of Primitive-Based SDFs
Wilhem Barbier, Mathieu Sanchez, Axel Paris, Élie Michel, Thibaud Lambert, Tamy Boubekeur, Mathias Paulin, and Theo Thonat
HPRO: Direct Visibility of Point Clouds for Optimization
Sagi Katz and Ayellet Tal
Rigged for Success: Character Animation and Retargeting
How to Train Your Dragon: Automatic Diffusion-Based Rigging for Characters with Diverse Topologies
Zeqi Gu, Difan Liu, Timothy Langlois, Matthew Fisher, and Abe Davis
ASMR: Adaptive Skeleton-Mesh Rigging and Skinning via 2D Generative Prior
Seokhyeon Hong, Soojin Choi, Chaelin Kim, Sihun Cha, and Junyong Noh
ReConForM: Real-time Contact-aware Motion Retargeting for more Diverse Character Morphologies
Théo Cheynel, Thomas Rossi, Baptiste Bellot-Gurlet, Damien Rohmer, and Marie-Paule Cani
InterFaceRays: Interaction-Oriented Furniture Surface Representation for Human Pose Retargeting
Taeil Jin, Yewon Lee, and Sung-Hee Lee
Lighting the Way: Scattering and Transport in Rendering
Linearly Transformed Spherical Distributions for Interactive Single Scattering with Area Lights
Aakash Kt, Ishaan Shah, and P. J. Narayanan
Adaptive Multi-view Radiance Caching for Heterogeneous Participating Media
Pascal Stadlbauer, Wolfgang Tatzgern, Joerg H. Mueller, Martin Winter, Robert Stojanovic, Alexander Weinrauch, and Markus Steinberger
Many-Light Rendering Using ReSTIR-Sampled Shadow Maps
Song Zhang, Daqi Lin, Chris Wyman, and Cem Yuksel
Neural Two-Level Monte Carlo Real-Time Rendering
Mikhail Dereviannykh, Dmitrii Klepikov, Johannes Hanika, and Carsten Dachsbacher
Inverse Simulation of Radiative Thermal Transport
Christian Freude, Lukas Lipp, Matthias Zezulka, Florian Rist, Michael Wimmer, and David Hahn
Real-Time Rendering: Fast, Furious, and Accurate
Real-time Procedural Resurfacing Using GPU Mesh Shader
Josué Raad, Arthur Delon, Mickaël Ribardière, Daniel Meneveaux, and Guillaume Gilet
SOBB: Skewed Oriented Bounding Boxes for Ray Tracing
Martin Káčerik and Jiří Bittner
Axis-Normalized Ray-Box Intersection
Fabian Friederichs, Carsten Benthin, Steve Grogorick, Elmar Eisemann, Marcus Magnor, and Martin Eisemann
Real-Time Rendering Framework for Holography
Sascha Fricke, Susana Castillo, Martin Eisemann, and Marcus Magnor
Simulating Complex Systems: Turbulent, Crowded, and Shattered
CEDRL: Simulating Diverse Crowds with Example-Driven Deep Reinforcement Learning
Andreas Panayiotou, Andreas Aristidou, and Panayiotis Charalambous
A Unified Multi-scale Method for Simulating Immersed Bubbles
Joel Wretborn, Alexey Stomakhin, and Christopher Batty
A Semi-Implicit SPH Method for Compressible and Incompressible Flows with Improved Convergence
Xiaowei He, Shusen Liu, Yuzhong Guo, Jian Shi, and Ying Qiao
Eigenvalue Blending for Projected Newton
Yuan-Yuan Cheng, Ligang Liu, and Xiao-Ming Fu
Shady Business: Materials, Textures, and Lighting
From Words to Wood: Text-to-Procedurally Generated Wood Materials
Mohcen Hafidi and Alexander Wilkie
Material Transforms from Disentangled NeRF Representations
Ivan Lopes, Jean-François Lalonde, and Raoul de Charette
Deformed Tiling and Blending: Application to the Correction of Distortions Implied by Texture Mapping
Quentin Wendling, Joris Ravaglia, and Basile Sauvage
FastAtlas: Real-Time Compact Atlases for Texture Space Shading
Nicholas Vining, Zander Majercik, Floria Gu, Towaki Takikawa, Ty Trusty, Paul Lalonde, Morgan McGuire, and Alla Sheffer
All-frequency Full-body Human Image Relighting
Daichi Tajima, Yoshihiro Kanamori, and Yuki Endo
Built for Reality: Analyzing, Crafting and Fabricating Structures
S-ACORD: Spectral Analysis of COral Reef Deformation
Naama Alon-Borissiouk, Matan Yuval, Tali Treibitz, and Mirela Ben-Chen
Optimizing Free-Form Grid Shells with Reclaimed Elements under Inventory Constraints
Andrea Favilli, Francesco Laccone, Paolo Cignoni, Luigi Malomo, and Daniela Giorgi
Bringing Motion to Life: Motion Reconstruction and Control
Shape-Conditioned Human Motion Diffusion Model with Mesh Representation
Kebing Xue, Hyewon Seo, Cédric Bobenrieth, and Guoliang Luo
Versatile Physics-based Character Control with Hybrid Latent Representation
Jinseok Bae, Jungdam Won, Donggeun Lim, Inwoo Hwang, and Young Min Kim
Generative Motion Infilling from Imprecisely Timed Keyframes
Purvi Goel, Haotian Zhang, C. Karen Liu, and Kayvon Fatahalian
DragPoser: Motion Reconstruction from Variable Sparse Tracking Signals via Latent Space Optimization
Jose Luis Ponton, Eduard Pujol, Andreas Aristidou, Carlos Andujar, and Nuria Pelechano
Multi-Modal Instrument Performances (MMIP): A Musical Database
Theodoros Kyriakou, Andreas Aristidou, and Panayiotis Charalambous
Geometrically, Parametrically Speaking
Mesh Compression with Quantized Neural Displacement Fields
Sai Karthikey Pentapati, Gregoire Phillips, and Alan C. Bovik
Preconditioned Single-step Transforms for Non-rigid ICP
Yucheol Jung, Hyomin Kim, Hyejeong Yoon, and Seungyong Lee
Isosurface Extraction for Signed Distance Functions using Power Diagrams
Maximilian Kohlbrenner and Marc Alexa
Learning Metric Fields for Fast Low-Distortion Mesh Parameterizations
Guy Fargion and Ofir Weber
Towards Scaling-Invariant Projections for Data Visualization
Joel Dierkes, Daniel Stelter, Christian Rössl, and Holger Theisel
The Artful Edit: Stylization and Editing for Images and Video
Neural Film Grain Rendering
Gwilherm Lesné, Yann Gousseau, Saïd Ladjal, and Alasdair Newson
StyleBlend: Enhancing Style-Specific Content Creation in Text-to-Image Diffusion Models
Zichong Chen, Shijin Wang, and Yang Zhou
Synchronized Multi-Frame Diffusion for Temporally Consistent Video Stylization
Minshan Xie, Hanyuan Liu, Chengze Li, and Tien-Tsin Wong
REED-VAE: RE-Encode Decode Training for Iterative Image Editing with Diffusion Models
Gal Almog, Ariel Shamir, and Ohad Fried
Differential Diffusion: Giving Each Pixel Its Strength
Eran Levin and Ohad Fried
Soft Bodies, Strands, and Silks
Rest Shape Optimization for Sag-Free Discrete Elastic Rods
Tetsuya Takahashi and Christopher Batty
BlendSim: Simulation on Parametric Blendshapes using Spacetime Projective Dynamics
Yuhan Wu and Nobuyuki Umetani
A Unified Discrete Collision Framework for Triangle Primitives
Tomoyo Kikuchi and Takashi Kanai
Cloth Animation with Time-dependent Persistent Wrinkles
Deshan Gong, Yin Yang, Tianjia Shao, and He Wang
Corotational Hinge-based Thin Plates/Shells
Qixin Liang

BibTeX (44-Issue 2)
@article{10.1111:cgf.70007,
  journal = {Computer Graphics Forum},
  title = {{All-frequency Full-body Human Image Relighting}},
  author = {Tajima, Daichi and Kanamori, Yoshihiro and Endo, Yuki},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70007}
}

@article{10.1111:cgf.70008,
  journal = {Computer Graphics Forum},
  title = {{Material Transforms from Disentangled NeRF Representations}},
  author = {Lopes, Ivan and Lalonde, Jean-François and Charette, Raoul de},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70008}
}

@article{10.1111:cgf.70009,
  journal = {Computer Graphics Forum},
  title = {{Neural Face Skinning for Mesh-agnostic Facial Expression Cloning}},
  author = {Cha, Sihun and Yoon, Serin and Seo, Kwanggyoon and Noh, Junyong},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70009}
}

@article{10.1111:cgf.70010,
  journal = {Computer Graphics Forum},
  title = {{FastAtlas: Real-Time Compact Atlases for Texture Space Shading}},
  author = {Vining, Nicholas and Majercik, Zander and Gu, Floria and Takikawa, Towaki and Trusty, Ty and Lalonde, Paul and McGuire, Morgan and Sheffer, Alla},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70010}
}

@article{10.1111:cgf.70011,
  journal = {Computer Graphics Forum},
  title = {{Deformed Tiling and Blending: Application to the Correction of Distortions Implied by Texture Mapping}},
  author = {Wendling, Quentin and Ravaglia, Joris and Sauvage, Basile},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70011}
}

@article{10.1111:cgf.70012,
  journal = {Computer Graphics Forum},
  title = {{NoPe-NeRF++: Local-to-Global Optimization of NeRF with No Pose Prior}},
  author = {Shi, Dongbo and Cao, Shen and Wu, Bojian and Guo, Jinhui and Fan, Lubin and Chen, Renjie and Liu, Ligang and Ye, Jieping},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70012}
}

@article{10.1111:cgf.70013,
  journal = {Computer Graphics Forum},
  title = {{SAHLUT: Efficient Image Enhancement using Spatial-Aware High-Light Compensation Look-up Tables}},
  author = {Chen, Xin and Li, Linge and Mu, Linhong and Chen, Yan and Guan, Jingwei},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70013}
}

@article{10.1111:cgf.70014,
  journal = {Computer Graphics Forum},
  title = {{Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency}},
  author = {Hahlbohm, Florian and Friederichs, Fabian and Weyrich, Tim and Franke, Linus and Kappel, Moritz and Castillo, Susana and Stamminger, Marc and Eisemann, Martin and Magnor, Marcus},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70014}
}

@article{10.1111:cgf.70015,
  journal = {Computer Graphics Forum},
  title = {{CEDRL: Simulating Diverse Crowds with Example-Driven Deep Reinforcement Learning}},
  author = {Panayiotou, Andreas and Aristidou, Andreas and Charalambous, Panayiotis},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70015}
}

@article{10.1111:cgf.70016,
  journal = {Computer Graphics Forum},
  title = {{How to Train Your Dragon: Automatic Diffusion-Based Rigging for Characters with Diverse Topologies}},
  author = {Gu, Zeqi and Liu, Difan and Langlois, Timothy and Fisher, Matthew and Davis, Abe},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70016}
}

@article{10.1111:cgf.70017,
  journal = {Computer Graphics Forum},
  title = {{Physically Based Real-Time Rendering of Eclipses}},
  author = {Schneegans, Simon and Gilg, Jonas and Ahlers, Volker and Zachmann, Gabriel and Gerndt, Andreas},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70017}
}

@article{10.1111:cgf.70018,
  journal = {Computer Graphics Forum},
  title = {{Versatile Physics-based Character Control with Hybrid Latent Representation}},
  author = {Bae, Jinseok and Won, Jungdam and Lim, Donggeun and Hwang, Inwoo and Kim, Young Min},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70018}
}

@article{10.1111:cgf.70019,
  journal = {Computer Graphics Forum},
  title = {{Rest Shape Optimization for Sag-Free Discrete Elastic Rods}},
  author = {Takahashi, Tetsuya and Batty, Christopher},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70019}
}

@article{10.1111:cgf.70020,
  journal = {Computer Graphics Forum},
  title = {{REED-VAE: RE-Encode Decode Training for Iterative Image Editing with Diffusion Models}},
  author = {Almog, Gal and Shamir, Ariel and Fried, Ohad},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70020}
}

@article{10.1111:cgf.70021,
  journal = {Computer Graphics Forum},
  title = {{Neural Geometry Processing via Spherical Neural Surfaces}},
  author = {Williamson, Romy and Mitra, Niloy J.},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70021}
}

@article{10.1111:cgf.70022,
  journal = {Computer Graphics Forum},
  title = {{Corotational Hinge-based Thin Plates/Shells}},
  author = {Liang, Qixin},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70022}
}

@article{10.1111:cgf.70023,
  journal = {Computer Graphics Forum},
  title = {{2D Neural Fields with Learned Discontinuities}},
  author = {Liu, Chenxi and Wang, Siqi and Fisher, Matthew and Aneja, Deepali and Jacobson, Alec},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70023}
}

@article{10.1111:cgf.70024,
  journal = {Computer Graphics Forum},
  title = {{Approximating Procedural Models of 3D Shapes with Neural Networks}},
  author = {Hossain, Ishtiaque and Shen, I-Chao and Kaick, Oliver van},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70024}
}

@article{10.1111:cgf.70025,
  journal = {Computer Graphics Forum},
  title = {{Multi-Modal Instrument Performances (MMIP): A Musical Database}},
  author = {Kyriakou, Theodoros and Aristidou, Andreas and Charalambous, Panayiotis},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70025}
}

@article{10.1111:cgf.70026,
  journal = {Computer Graphics Forum},
  title = {{DragPoser: Motion Reconstruction from Variable Sparse Tracking Signals via Latent Space Optimization}},
  author = {Ponton, Jose Luis and Pujol, Eduard and Aristidou, Andreas and Andujar, Carlos and Pelechano, Nuria},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70026}
}

@article{10.1111:cgf.70027,
  journal = {Computer Graphics Forum},
  title = {{Eigenvalue Blending for Projected Newton}},
  author = {Cheng, Yuan-Yuan and Liu, Ligang and Fu, Xiao-Ming},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70027}
}

@article{10.1111:cgf.70028,
  journal = {Computer Graphics Forum},
  title = {{ReConForM: Real-time Contact-aware Motion Retargeting for more Diverse Character Morphologies}},
  author = {Cheynel, Théo and Rossi, Thomas and Bellot-Gurlet, Baptiste and Rohmer, Damien and Cani, Marie-Paule},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70028}
}

@article{10.1111:cgf.70029,
  journal = {Computer Graphics Forum},
  title = {{A Unified Discrete Collision Framework for Triangle Primitives}},
  author = {Kikuchi, Tomoyo and Kanai, Takashi},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70029}
}

@article{10.1111:cgf.70030,
  journal = {Computer Graphics Forum},
  title = {{A Multimodal Personality Prediction Framework based on Adaptive Graph Transformer Network and Multi-task Learning}},
  author = {Wang, Rongquan and Zhao, Xile and Xu, Xianyu and Hao, Yang},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70030}
}

@article{10.1111:cgf.70031,
  journal = {Computer Graphics Forum},
  title = {{Cloth Animation with Time-dependent Persistent Wrinkles}},
  author = {Gong, Deshan and Yang, Yin and Shao, Tianjia and Wang, He},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70031}
}

@article{10.1111:cgf.70032,
  journal = {Computer Graphics Forum},
  title = {{Does 3D Gaussian Splatting Need Accurate Volumetric Rendering?}},
  author = {Celarek, Adam and Kopanas, Georgios and Drettakis, George and Wimmer, Michael and Kerbl, Bernhard},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70032}
}

@article{10.1111:cgf.70033,
  journal = {Computer Graphics Forum},
  title = {{A Unified Multi-scale Method for Simulating Immersed Bubbles}},
  author = {Wretborn, Joel and Stomakhin, Alexey and Batty, Christopher},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70033}
}

@article{10.1111:cgf.70034,
  journal = {Computer Graphics Forum},
  title = {{StyleBlend: Enhancing Style-Specific Content Creation in Text-to-Image Diffusion Models}},
  author = {Chen, Zichong and Wang, Shijin and Zhou, Yang},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70034}
}

@article{10.1111:cgf.70035,
  journal = {Computer Graphics Forum},
  title = {{Preconditioned Single-step Transforms for Non-rigid ICP}},
  author = {Jung, Yucheol and Kim, Hyomin and Yoon, Hyejeong and Lee, Seungyong},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70035}
}

@article{10.1111:cgf.70036,
  journal = {Computer Graphics Forum},
  title = {{FlairGPT: Repurposing LLMs for Interior Designs}},
  author = {Littlefair, Gabrielle and Dutt, Niladri Shekhar and Mitra, Niloy J.},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70036}
}

@article{10.1111:cgf.70037,
  journal = {Computer Graphics Forum},
  title = {{Isosurface Extraction for Signed Distance Functions using Power Diagrams}},
  author = {Kohlbrenner, Maximilian and Alexa, Marc},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70037}
}

@article{10.1111:cgf.70038,
  journal = {Computer Graphics Forum},
  title = {{D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video}},
  author = {Kappel, Moritz and Hahlbohm, Florian and Scholz, Timon and Castillo, Susana and Theobalt, Christian and Eisemann, Martin and Golyanik, Vladislav and Magnor, Marcus},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70038}
}

@article{10.1111:cgf.70039,
  journal = {Computer Graphics Forum},
  title = {{Text-Guided Interactive Scene Synthesis with Scene Prior Guidance}},
  author = {Fang, Shaoheng and Yang, Haitao and Mooney, Raymond and Huang, Qixing},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70039}
}

@article{10.1111:cgf.70040,
  journal = {Computer Graphics Forum},
  title = {{Differential Diffusion: Giving Each Pixel Its Strength}},
  author = {Levin, Eran and Fried, Ohad},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70040}
}

@article{10.1111:cgf.70041,
  journal = {Computer Graphics Forum},
  title = {{Axis-Normalized Ray-Box Intersection}},
  author = {Friederichs, Fabian and Benthin, Carsten and Grogorick, Steve and Eisemann, Elmar and Magnor, Marcus and Eisemann, Martin},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70041}
}

@article{10.1111:cgf.70042,
  journal = {Computer Graphics Forum},
  title = {{VortexTransformer: End-to-End Objective Vortex Detection in 2D Unsteady Flow Using Transformers}},
  author = {Zhang, Xingdi and Rautek, Peter and Hadwiger, Markus},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70042}
}

@article{10.1111:cgf.70043,
  journal = {Computer Graphics Forum},
  title = {{A Semi-Implicit SPH Method for Compressible and Incompressible Flows with Improved Convergence}},
  author = {He, Xiaowei and Liu, Shusen and Guo, Yuzhong and Shi, Jian and Qiao, Ying},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70043}
}

@article{10.1111:cgf.70044,
  journal = {Computer Graphics Forum},
  title = {{S-ACORD: Spectral Analysis of COral Reef Deformation}},
  author = {Alon-Borissiouk, Naama and Yuval, Matan and Treibitz, Tali and Ben-Chen, Mirela},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70044}
}

@article{10.1111:cgf.70045,
  journal = {Computer Graphics Forum},
  title = {{NePHIM: A Neural Physics-Based Head-Hand Interaction Model}},
  author = {Wagner, Nicolas and Schwanecke, Ulrich and Botsch, Mario},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70045}
}

@article{10.1111:cgf.70046,
  journal = {Computer Graphics Forum},
  title = {{HPRO: Direct Visibility of Point Clouds for Optimization}},
  author = {Katz, Sagi and Tal, Ayellet},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70046}
}

@article{10.1111:cgf.70047,
  journal = {Computer Graphics Forum},
  title = {{Optimizing Free-Form Grid Shells with Reclaimed Elements under Inventory Constraints}},
  author = {Favilli, Andrea and Laccone, Francesco and Cignoni, Paolo and Malomo, Luigi and Giorgi, Daniela},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70047}
}

@article{10.1111:cgf.70048,
  journal = {Computer Graphics Forum},
  title = {{Inverse Simulation of Radiative Thermal Transport}},
  author = {Freude, Christian and Lipp, Lukas and Zezulka, Matthias and Rist, Florian and Wimmer, Michael and Hahn, David},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70048}
}

@article{10.1111:cgf.70049,
  journal = {Computer Graphics Forum},
  title = {{Linearly Transformed Spherical Distributions for Interactive Single Scattering with Area Lights}},
  author = {Kt, Aakash and Shah, Ishaan and Narayanan, P. J.},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70049}
}

@article{10.1111:cgf.70050,
  journal = {Computer Graphics Forum},
  title = {{Neural Two-Level Monte Carlo Real-Time Rendering}},
  author = {Dereviannykh, Mikhail and Klepikov, Dmitrii and Hanika, Johannes and Dachsbacher, Carsten},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70050}
}

@article{10.1111:cgf.70051,
  journal = {Computer Graphics Forum},
  title = {{Adaptive Multi-view Radiance Caching for Heterogeneous Participating Media}},
  author = {Stadlbauer, Pascal and Tatzgern, Wolfgang and Mueller, Joerg H. and Winter, Martin and Stojanovic, Robert and Weinrauch, Alexander and Steinberger, Markus},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70051}
}

@article{10.1111:cgf.70052,
  journal = {Computer Graphics Forum},
  title = {{ASMR: Adaptive Skeleton-Mesh Rigging and Skinning via 2D Generative Prior}},
  author = {Hong, Seokhyeon and Choi, Soojin and Kim, Chaelin and Cha, Sihun and Noh, Junyong},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70052}
}

@article{10.1111:cgf.70055,
  journal = {Computer Graphics Forum},
  title = {{Image Vectorization via Gradient Reconstruction}},
  author = {Chakraborty, Souymodip and Batra, Vineet and Phogat, Ankit and Jain, Vishwas and Ranawat, Jaswant Singh and Dhingra, Sumit and Wampler, Kevin and Fisher, Matthew and Lukáč, Michal},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70055}
}

@article{10.1111:cgf.70056,
  journal = {Computer Graphics Forum},
  title = {{Implicit UVs: Real-time Semi-global Parameterization of Implicit Surfaces}},
  author = {Genest, Baptiste and Gueth, Pierre and Levallois, Jérémy and Wang, Stephanie},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70056}
}

@article{10.1111:cgf.70057,
  journal = {Computer Graphics Forum},
  title = {{Lipschitz Pruning: Hierarchical Simplification of Primitive-Based SDFs}},
  author = {Barbier, Wilhem and Sanchez, Mathieu and Paris, Axel and Michel, Élie and Lambert, Thibaud and Boubekeur, Tamy and Paulin, Mathias and Thonat, Theo},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70057}
}

@article{10.1111:cgf.70058,
  journal = {Computer Graphics Forum},
  title = {{Real-Time Rendering Framework for Holography}},
  author = {Fricke, Sascha and Castillo, Susana and Eisemann, Martin and Magnor, Marcus},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70058}
}

@article{10.1111:cgf.70059,
  journal = {Computer Graphics Forum},
  title = {{Many-Light Rendering Using ReSTIR-Sampled Shadow Maps}},
  author = {Zhang, Song and Lin, Daqi and Wyman, Chris and Yuksel, Cem},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70059}
}

@article{10.1111:cgf.70060,
  journal = {Computer Graphics Forum},
  title = {{Generative Motion Infilling from Imprecisely Timed Keyframes}},
  author = {Goel, Purvi and Zhang, Haotian and Liu, C. Karen and Fatahalian, Kayvon},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70060}
}

@article{10.1111:cgf.70061,
  journal = {Computer Graphics Forum},
  title = {{Learning Metric Fields for Fast Low-Distortion Mesh Parameterizations}},
  author = {Fargion, Guy and Weber, Ofir},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70061}
}

@article{10.1111:cgf.70062,
  journal = {Computer Graphics Forum},
  title = {{SOBB: Skewed Oriented Bounding Boxes for Ray Tracing}},
  author = {Káčerik, Martin and Bittner, Jiří},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70062}
}

@article{10.1111:cgf.70063,
  journal = {Computer Graphics Forum},
  title = {{Towards Scaling-Invariant Projections for Data Visualization}},
  author = {Dierkes, Joel and Stelter, Daniel and Rössl, Christian and Theisel, Holger},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70063}
}

@article{10.1111:cgf.70064,
  journal = {Computer Graphics Forum},
  title = {{View-Dependent Visibility Optimization for Monte Carlo Volume Visualization}},
  author = {Lerzer, Nathan and Dachsbacher, Carsten},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70064}
}

@article{10.1111:cgf.70065,
  journal = {Computer Graphics Forum},
  title = {{Shape-Conditioned Human Motion Diffusion Model with Mesh Representation}},
  author = {Xue, Kebing and Seo, Hyewon and Bobenrieth, Cédric and Luo, Guoliang},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70065}
}

@article{10.1111:cgf.70066,
  journal = {Computer Graphics Forum},
  title = {{From Words to Wood: Text-to-Procedurally Generated Wood Materials}},
  author = {Hafidi, Mohcen and Wilkie, Alexander},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70066}
}

@article{10.1111:cgf.70068,
  journal = {Computer Graphics Forum},
  title = {{BlendSim: Simulation on Parametric Blendshapes using Spacetime Projective Dynamics}},
  author = {Wu, Yuhan and Umetani, Nobuyuki},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70068}
}

@article{
10.1111:cgf.70069,
journal = {Computer Graphics Forum}, title = {{
Learning Fast 3D Gaussian Splatting Rendering using Continuous Level of Detail}},
author = {
Milef, Nicholas
and
Seyb, Dario
and
Keeler, Todd
and
Nguyen-Phuoc, Thu
and
Bozic, Aljaz
and
Kondguli, Sushant
and
Marshall, Carl
}, year = {
2025},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.70069}
}

Recent Submissions

  • Item
    All-frequency Full-body Human Image Relighting
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Tajima, Daichi; Kanamori, Yoshihiro; Endo, Yuki; Bousseau, Adrien; Day, Angela
    Relighting of human images enables post-photography editing of lighting effects in portraits. The current mainstream approach uses neural networks to approximate lighting effects without explicitly accounting for the principle of physical shading. As a result, it often has difficulty representing high-frequency shadows and shading. In this paper, we propose a two-stage relighting method that can reproduce physically-based shadows and shading from low to high frequencies. The key idea is to approximate an environment light source with a fixed number of area light sources. The first stage employs supervised inverse rendering from a single image using neural networks and calculates physically-based shading. The second stage then calculates shadows for each area light and sums them to render the final image. We propose to make soft shadow mapping differentiable for the area-light approximation of environment lighting. We demonstrate that our method can plausibly reproduce all-frequency shadows and shading caused by environment illumination, which have been difficult to reproduce using existing methods.
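    The per-light accumulation described in the abstract reduces, at composition time, to a weighted sum over the area lights. A minimal sketch of that final step (the array shapes and names are illustrative assumptions, not the paper's implementation):

    ```python
    import numpy as np

    def composite_relit_image(shading, shadows, light_colors):
        """Sum per-area-light contributions into the final relit image.

        shading:      (N, H, W, 3) physically-based shading for each area light
        shadows:      (N, H, W)    soft-shadow visibility in [0, 1] per light
        light_colors: (N, 3)       RGB intensity of each area light
        """
        vis = shadows[..., None]                              # broadcast to RGB
        lit = shading * vis * light_colors[:, None, None, :]  # per-light images
        return lit.sum(axis=0)                                # sum over N lights
    ```

    Because each term is differentiable (including, per the paper, the soft shadow maps), gradients can flow from the composited image back to the light parameters.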
  • Item
    Material Transforms from Disentangled NeRF Representations
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Lopes, Ivan; Lalonde, Jean-François; Charette, Raoul de; Bousseau, Adrien; Day, Angela
    In this paper, we propose a novel method for transferring material transformations across different scenes. Building on disentangled Neural Radiance Field (NeRF) representations, our approach learns to map Bidirectional Reflectance Distribution Functions (BRDF) from pairs of scenes observed in varying conditions, such as dry and wet. The learned transformations can then be applied to unseen scenes with similar materials, effectively rendering the learned transformation at an arbitrary level of intensity. Extensive experiments on synthetic scenes and real-world objects validate the effectiveness of our approach, showing that it can learn various transformations such as wetness, painting, coating, etc. Our results highlight not only the versatility of our method but also its potential for practical applications in computer graphics. We publish our method implementation, along with our synthetic/real datasets, on https://github.com/astra-vision/BRDFTransform
  • Item
    Neural Face Skinning for Mesh-agnostic Facial Expression Cloning
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Cha, Sihun; Yoon, Serin; Seo, Kwanggyoon; Noh, Junyong; Bousseau, Adrien; Day, Angela
    Accurately retargeting facial expressions to a face mesh while enabling manipulation is a key challenge in facial animation retargeting. Recent deep-learning methods address this by encoding facial expressions into a global latent code, but they often fail to capture fine-grained details in local regions. While some methods improve local accuracy by transferring deformations locally, this often complicates overall control of the facial expression. To address this, we propose a method that combines the strengths of both global and local deformation models. Our approach enables intuitive control and detailed expression cloning across diverse face meshes, regardless of their underlying structures. The core idea is to localize the influence of the global latent code on the target mesh. Our model learns to predict skinning weights for each vertex of the target face mesh through indirect supervision from predefined segmentation labels. These predicted weights localize the global latent code, enabling precise and region-specific deformations even for meshes with unseen shapes. We supervise the latent code using Facial Action Coding System (FACS)-based blendshapes to ensure interpretability and allow straightforward editing of the generated animation. Through extensive experiments, we demonstrate improved performance over state-of-the-art methods in terms of expression fidelity, deformation transfer accuracy, and adaptability across diverse mesh structures.
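    The core localization idea, blending region-wise deformations with per-vertex skinning weights, can be sketched as a weighted sum (all names, shapes, and the precomputed offsets are hypothetical; in the actual method a network predicts the weights and a decoder produces the deformations from the latent code):

    ```python
    import numpy as np

    def localized_deformation(vertices, weights, region_offsets):
        """Blend per-region deformations using predicted skinning weights.

        vertices:       (V, 3)    rest-pose vertex positions
        weights:        (V, K)    per-vertex skinning weights (rows sum to 1)
        region_offsets: (K, V, 3) deformation decoded for each of K regions
        """
        # Each vertex mixes the K regional deformations by its own weights.
        blended = np.einsum('vk,kvj->vj', weights, region_offsets)
        return vertices + blended
    ```

    The weights act as a soft segmentation: a vertex dominated by one region follows that region's deformation almost exclusively, which is what makes region-specific edits possible.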
  • Item
    FastAtlas: Real-Time Compact Atlases for Texture Space Shading
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Vining, Nicholas; Majercik, Zander; Gu, Floria; Takikawa, Towaki; Trusty, Ty; Lalonde, Paul; McGuire, Morgan; Sheffer, Alla; Bousseau, Adrien; Day, Angela
    Texture-space shading (TSS) methods decouple shading and rasterization, allowing shading to be performed at a different framerate and spatial resolution than rasterization. TSS has many potential applications, including streaming shading across networks, and reducing rendering cost via shading reuse across consecutive frames and/or shading at reduced resolutions relative to display resolution. Real-time TSS shading requires texture atlases small enough to be easily stored in GPU memory. Using static atlases leads to significant space wastage, motivating real-time per-frame atlasing strategies that pack only the content visible in each frame. We propose FastAtlas, a novel atlasing method that runs entirely on the GPU and is fast enough to be performed at interactive rates per-frame. Our method combines new per-frame chart computation and parametrization strategies and an efficient general chart packing algorithm. Our chartification strategy removes visible seams in output renders, and our parameterization ensures a constant texel-to-pixel ratio, avoiding undesirable undersampling artifacts. Our packing method is more general, and produces more tightly packed atlases, than previous work. Jointly, these innovations enable us to produce shading outputs of significantly higher visual quality than those produced using alternative atlasing strategies. We validate FastAtlas by shading and rendering challenging scenes using different atlasing settings, reflecting the needs of different TSS applications (temporal reuse, streaming, reduced or elevated shading rates). We extensively compare FastAtlas to prior alternatives and demonstrate that it achieves better shading quality and reduces texture stretch compared to prior approaches using the same settings.
  • Item
    Deformed Tiling and Blending: Application to the Correction of Distortions Implied by Texture Mapping
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Wendling, Quentin; Ravaglia, Joris; Sauvage, Basile; Bousseau, Adrien; Day, Angela
    The prevailing model in virtual 3D scenes is a 3D surface, which a texture is mapped onto, through a parameterization from the texture plane. We focus on accounting for the parameterization during the texture creation process, to control the deformations and remove the cuts induced by the mapping. We rely on the tiling and blending, a real-time and parallel algorithm that generates an arbitrary large texture from a small input example. Our first contribution is to enhance the tiling and blending with a deformation field, which controls smooth spatial variations in the texture plane. Our second contribution is to derive, from a parameterized triangle mesh, a deformation field to compensate for texture distortions and to control for the texture orientation. Our third contribution is a technique to enforce texture continuity across the cuts, thanks to a proper tile selection. This opens the door to interactive sessions with artistic control, and real-time rendering with improved visual quality.
  • Item
    NoPe-NeRF++: Local-to-Global Optimization of NeRF with No Pose Prior
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Shi, Dongbo; Cao, Shen; Wu, Bojian; Guo, Jinhui; Fan, Lubin; Chen, Renjie; Liu, Ligang; Ye, Jieping; Bousseau, Adrien; Day, Angela
    In this paper, we introduce NoPe-NeRF++, a novel local-to-global optimization algorithm for training Neural Radiance Fields (NeRF) without requiring pose priors. Existing methods, particularly NoPe-NeRF, which focus solely on the local relationships within images, often struggle to recover accurate camera poses in complex scenarios. To overcome these challenges, our approach begins with a relative pose initialization with explicit feature matching, followed by a local joint optimization to enhance the pose estimation for training a more robust NeRF representation. This method significantly improves the quality of initial poses. Additionally, we introduce a global optimization phase that incorporates geometric consistency constraints through bundle adjustment, which integrates feature trajectories to further refine poses and collectively boost the quality of NeRF. Notably, our method is the first work that seamlessly combines the local and global cues with NeRF, and outperforms state-of-the-art methods in both pose estimation accuracy and novel view synthesis. Extensive evaluations on benchmark datasets demonstrate our superior performance and robustness, even in challenging scenes, thus validating our design choices.
  • Item
    SHLUT: Efficient Image Enhancement using Spatial-Aware High-Light Compensation Look-up Tables
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Chen, Xin; Li, Linge; Mu, Linhong; Chen, Yan; Guan, Jingwei; Bousseau, Adrien; Day, Angela
    Recently, the look-up table (LUT)-based method has achieved remarkable success in image enhancement tasks with its high efficiency and lightweight nature. However, when considering edge scenarios with limited computational resources, most existing methods fail to meet practical requirements due to their costly floating-point operations on convolution layers, which limit their general use. Moreover, most LUT-based methods may not perform well in handling high-light regions. To address these issues, we propose SHLUT, an efficient and practical image enhancement method by using spatial-aware high-light compensation look-up tables (LUTs), which comprise two parts. Firstly, we propose a spatial-aware weight predictor to reduce the computational burden. A lightweight network is trained to predict spatial-aware weight values, and then we transfer the values to the LUTs. Additionally, to correct overexposure in high-light regions, we propose a high-light compensation 3D LUT. Our proposed method allows us to directly retrieve the values from the LUTs to achieve efficient image enhancement at test time. Extensive experimental results demonstrate that SHLUT exhibits competitive performance compared to other LUT-based methods both quantitatively and qualitatively in a more efficient manner. For instance, SHLUT significantly reduces computational resources (at least 18 times in GFLOPs compared to other LUT-based methods), while excelling in high-light region handling.
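    At test time a LUT-based enhancer only indexes precomputed tables, which is the source of the efficiency claimed above. A minimal sketch of generic 3D-LUT retrieval (nearest-neighbour for brevity, whereas practical LUTs usually interpolate between cells; the names are illustrative, not SHLUT's API):

    ```python
    import numpy as np

    def apply_3d_lut(image, lut):
        """Nearest-neighbour retrieval from a 3D colour look-up table.

        image: (H, W, 3) float RGB in [0, 1]
        lut:   (S, S, S, 3) table mapping quantised RGB to output RGB
        """
        s = lut.shape[0]
        # Quantise each channel to a table index and gather the output colour.
        idx = np.clip((image * (s - 1)).round().astype(int), 0, s - 1)
        return lut[idx[..., 0], idx[..., 1], idx[..., 2]]
    ```

    No convolutions run at inference; the cost is a handful of integer operations and one gather per pixel, which is why such methods suit resource-limited edge devices.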
  • Item
    Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Hahlbohm, Florian; Friederichs, Fabian; Weyrich, Tim; Franke, Linus; Kappel, Moritz; Castillo, Susana; Stamminger, Marc; Eisemann, Martin; Magnor, Marcus; Bousseau, Adrien; Day, Angela
    3D Gaussian Splats (3DGS) have proven a versatile rendering primitive, both for inverse rendering as well as real-time exploration of scenes. In these applications, coherence across camera frames and multiple views is crucial, be it for robust convergence of a scene reconstruction or for artifact-free fly-throughs. Recent work started mitigating artifacts that break multi-view coherence, including popping artifacts due to inconsistent transparency sorting and perspective-correct outlines of (2D) splats. At the same time, real-time requirements forced such implementations to accept compromises in how transparency of large assemblies of 3D Gaussians is resolved, in turn breaking coherence in other ways. In our work, we aim at achieving maximum coherence, by rendering fully perspective-correct 3D Gaussians while using a high-quality approximation of accurate blending, hybrid transparency, on a per-pixel level, in order to retain real-time frame rates. Our fast and perspectively accurate approach for evaluation of 3D Gaussians does not require matrix inversions, thereby ensuring numerical stability and eliminating the need for special handling of degenerate splats, and the hybrid transparency formulation for blending maintains similar quality as fully resolved per-pixel transparencies at a fraction of the rendering costs. We further show that each of these two components can be independently integrated into Gaussian splatting systems. In combination, they achieve up to 2× higher frame rates, 2× faster optimization, and equal or better image quality with fewer rendering artifacts compared to traditional 3DGS on common benchmarks.
  • Item
    CEDRL: Simulating Diverse Crowds with Example-Driven Deep Reinforcement Learning
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Panayiotou, Andreas; Aristidou, Andreas; Charalambous, Panayiotis; Bousseau, Adrien; Day, Angela
    The level of realism in virtual crowds is strongly affected by the presence of diverse crowd behaviors. In real life, we can observe various scenarios, ranging from pedestrians moving on a shopping street, people talking in static groups, or wandering around in a public park. Most of the existing systems optimize for specific behaviors such as goal-seeking and collision avoidance, neglecting to consider other complex behaviors that are usually challenging to capture or define. Departing from the conventional use of Supervised Learning, which requires vast amounts of labeled data and often lacks controllability, we introduce Crowds using Example-driven Deep Reinforcement Learning (CEDRL), a framework that simultaneously leverages multiple crowd datasets to model a broad spectrum of human behaviors. This approach enables agents to adaptively learn and exhibit diverse behaviors, enhancing their ability to generalize decisions across unseen states. The model can be applied to populate novel virtual environments while providing real-time controllability over the agents' behaviors. We achieve this through the design of a reward function aligned with real-world observations and by employing curriculum learning that gradually diminishes the agents' observation space. A complexity characterization metric defines each agent's high-level crowd behavior, linking it to the agent's state and serving as an input to the policy network. Additionally, a parametric reward function, influenced by the type of crowd task, facilitates the learning of a diverse and abstract behavior ''skill'' set. We evaluate our model on both training and unseen real-world data, comparing against other simulators, showing its ability to generalize across scenarios and accurately reflect the observed complexity of behaviors. We also examine our system's controllability by adjusting the complexity weight, discovering that higher values lead to more complex behaviors such as wandering, static interactions, and group dynamics like joining or leaving. Finally, we demonstrate our model's capabilities in novel synthetic scenarios.
  • Item
    How to Train Your Dragon: Automatic Diffusion-Based Rigging for Characters with Diverse Topologies
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Gu, Zeqi; Liu, Difan; Langlois, Timothy; Fisher, Matthew; Davis, Abe; Bousseau, Adrien; Day, Angela
    Recent diffusion-based methods have achieved impressive results on animating images of human subjects. However, most of that success has built on human-specific body pose representations and extensive training with labeled real videos. In this work, we extend the ability of such models to animate images of characters with more diverse skeletal topologies. Given a small number (3-5) of example frames showing the character in different poses with corresponding skeletal information, our model quickly infers a rig for that character that can generate images corresponding to new skeleton poses. We propose a procedural data generation pipeline that efficiently samples training data with diverse topologies on the fly. We use it, along with a novel skeleton representation, to train our model on articulated shapes spanning a large space of textures and topologies. Then during fine-tuning, our model rapidly adapts to unseen target characters and generalizes well to rendering new poses, both for realistic and more stylized cartoon appearances. To better evaluate performance on this novel and challenging task, we create the first 2D video dataset that contains both humanoid and non-humanoid subjects with per-frame keypoint annotations. With extensive experiments, we demonstrate the superior quality of our results.
  • Item
    Physically Based Real-Time Rendering of Eclipses
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Schneegans, Simon; Gilg, Jonas; Ahlers, Volker; Zachmann, Gabriel; Gerndt, Andreas; Bousseau, Adrien; Day, Angela
    We present a novel approach for simulating eclipses, incorporating effects of light scattering and refraction in the occluder's atmosphere. Our approach not only simulates the eclipse shadow, but also allows for watching the Sun being eclipsed by the occluder. The latter is a spectacular sight which has never been seen by human eyes: For an observer on the lunar surface, the atmosphere around Earth turns into a glowing red ring as sunlight is refracted around the planet. To simulate this, we add three key contributions: First, we extend the Bruneton atmosphere model to simulate refraction. This allows light rays to be bent into the shadow cone. Refraction also adds realism to the atmosphere as it deforms and displaces the Sun during sunrise and sunset. Second, we show how to precompute the eclipse shadow using this extended atmosphere model. Third, we show how to efficiently visualize the glowing atmosphere ring around the occluder. Our approach produces visually accurate results suited for scientific visualizations, science communication, and video games. It is not limited to the Earth-Moon system, but can also be used to simulate the shadow of Mars and potentially other bodies. We demonstrate the physical soundness of our approach by comparing the results to reference data. Because no data is available for eclipses beyond the Earth-Moon system, we predict what an eclipse on a Martian moon will look like. Our implementation is available under the terms of the MIT license.
  • Item
    Versatile Physics-based Character Control with Hybrid Latent Representation
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Bae, Jinseok; Won, Jungdam; Lim, Donggeun; Hwang, Inwoo; Kim, Young Min; Bousseau, Adrien; Day, Angela
    We present a versatile latent representation that enables physically simulated character to efficiently utilize motion priors. To build a powerful motion embedding that is shared across multiple tasks, the physics controller should employ rich latent space that is easily explored and capable of generating high-quality motion. We propose integrating continuous and discrete latent representations to build a versatile motion prior that can be adapted to a wide range of challenging control tasks. Specifically, we build a discrete latent model to capture distinctive posterior distribution without collapse, and simultaneously augment the sampled vector with the continuous residuals to generate high-quality, smooth motion without jittering. We further incorporate Residual Vector Quantization, which not only maximizes the capacity of the discrete motion prior, but also efficiently abstracts the action space during the task learning phase. We demonstrate that our agent can produce diverse yet smooth motions simply by traversing the learned motion prior through unconditional motion generation. Furthermore, our model robustly satisfies sparse goal conditions with highly expressive natural motions, including head-mounted device tracking and motion in-betweening at irregular intervals, which could not be achieved with existing latent representations.
  • Item
    Rest Shape Optimization for Sag-Free Discrete Elastic Rods
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Takahashi, Tetsuya; Batty, Christopher; Bousseau, Adrien; Day, Angela
    We propose a new rest shape optimization framework to achieve sag-free simulations of discrete elastic rods. To optimize rest shape parameters, we formulate a minimization problem based on the kinetic energy with a regularizer while imposing box constraints on these parameters to ensure the system's stability. Our method solves the resulting constrained minimization problem via the Gauss-Newton algorithm augmented with penalty methods. We demonstrate that the optimized rest shape parameters enable discrete elastic rods to achieve static equilibrium for a wide range of strand geometries and material parameters.
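    The constrained minimization described above can be illustrated with a toy Gauss-Newton loop. This sketch substitutes a simple projection onto the box for the paper's penalty methods, and all names and the least-squares objective are illustrative, not the actual kinetic-energy formulation:

    ```python
    import numpy as np

    def box_constrained_gauss_newton(residual, jacobian, p0, lo, hi, iters=20):
        """Minimise ||r(p)||^2 subject to lo <= p <= hi via Gauss-Newton.

        residual: callable p -> r(p), shape (m,)
        jacobian: callable p -> J(p), shape (m, n)
        """
        p = p0.copy()
        for _ in range(iters):
            r, J = residual(p), jacobian(p)
            # Gauss-Newton step: solve the linearised least-squares problem.
            step = np.linalg.lstsq(J, -r, rcond=None)[0]
            # Projection keeps the rest-shape parameters inside their bounds.
            p = np.clip(p + step, lo, hi)
        return p
    ```

    When the unconstrained minimiser lies outside the box, the iterate settles on the nearest feasible boundary value, mirroring how bounded rest-shape parameters keep the simulated system stable.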
  • Item
    REED-VAE: RE-Encode Decode Training for Iterative Image Editing with Diffusion Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Almog, Gal; Shamir, Ariel; Fried, Ohad; Bousseau, Adrien; Day, Angela
    While latent diffusion models achieve impressive image editing results, their application to iterative editing of the same image is severely restricted. When trying to apply consecutive edit operations using current models, they accumulate artifacts and noise due to repeated transitions between pixel and latent spaces. Some methods have attempted to address this limitation by performing the entire edit chain within the latent space, sacrificing flexibility by supporting only a limited, predetermined set of diffusion editing operations. We present a re-encode decode (REED) training scheme for variational autoencoders (VAEs), which promotes image quality preservation even after many iterations. Our work enables multi-method iterative image editing: users can perform a variety of iterative edit operations, with each operation building on the output of the previous one using both diffusion based operations and conventional editing techniques. We demonstrate the advantage of REED-VAE across a range of image editing scenarios, including text-based and mask-based editing frameworks. In addition, we show how REED-VAE enhances the overall editability of images, increasing the likelihood of successful and precise edit operations. We hope that this work will serve as a benchmark for the newly introduced task of multi-method image editing.
  • Item
    Neural Geometry Processing via Spherical Neural Surfaces
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Williamson, Romy; Mitra, Niloy J.; Bousseau, Adrien; Day, Angela
    Neural surfaces (e.g., neural map encoding, deep implicit, and neural radiance fields) have recently gained popularity because of their generic structure (e.g., multi-layer perceptron) and easy integration with modern learning-based setups. Traditionally, we have a rich toolbox of geometry processing algorithms designed for polygonal meshes to analyze and operate on surface geometry. Without an analogous toolbox, neural representations are typically discretized and converted into a mesh, before applying any geometry processing algorithm. This is unsatisfactory and, as we demonstrate, unnecessary. In this work, we propose a spherical neural surface representation for genus-0 surfaces and demonstrate how to compute core geometric operators directly on this representation. Namely, we estimate surface normals and first and second fundamental forms of the surface, as well as compute surface gradient, surface divergence and the Laplace-Beltrami operator on scalar/vector fields defined on the surface. Our representation is fully seamless, overcoming a key limitation of similar explicit representations such as Neural Surface Maps [MAKM21]. These operators, in turn, enable geometry processing directly on the neural representations without any unnecessary meshing. We demonstrate illustrative applications in (neural) spectral analysis, heat flow and mean curvature flow, and evaluate robustness to isometric shape variations. We propose theoretical formulations and validate their numerical estimates, against analytical estimates, mesh-based baselines, and neural alternatives, where available. By systematically linking neural surface representations with classical geometry processing algorithms, we believe this work can become a key ingredient in enabling neural geometry processing. Code is available via the project webpage.
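    For a genus-0 surface encoded as a smooth map $S:\mathbb{S}^2\to\mathbb{R}^3$ with local parameters $(u,v)$, the operators listed above follow from classical differential geometry (these are standard textbook formulas, not results specific to this paper):

    ```latex
    \mathbf{n} = \frac{S_u \times S_v}{\lVert S_u \times S_v \rVert},
    \qquad
    \mathrm{I} = \begin{pmatrix} E & F \\ F & G \end{pmatrix}
               = \begin{pmatrix} S_u\cdot S_u & S_u\cdot S_v \\
                                 S_u\cdot S_v & S_v\cdot S_v \end{pmatrix},
    \qquad
    \mathrm{II} = \begin{pmatrix} S_{uu}\cdot\mathbf{n} & S_{uv}\cdot\mathbf{n} \\
                                  S_{uv}\cdot\mathbf{n} & S_{vv}\cdot\mathbf{n} \end{pmatrix},
    % Laplace-Beltrami operator on a scalar field f, with g = det(I):
    \Delta_S f = \frac{1}{\sqrt{g}}\,\partial_i\!\left(\sqrt{g}\,(\mathrm{I}^{-1})^{ij}\,\partial_j f\right).
    ```

    Because $S$ is a neural network, the partial derivatives $S_u$, $S_v$, $S_{uu}$, etc. are available by automatic differentiation, which is what makes evaluating these operators directly on the neural representation practical.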
  • Item
    Corotational Hinge-based Thin Plates/Shells
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Liang, Qixin; Bousseau, Adrien; Day, Angela
    We present six thin plate/shell models, derived from three distinct types of curvature operators formulated within the corotational frame, for simulating both rest-flat and rest-curved triangular meshes. Each curvature operator derives a curvature expression corresponding to both a plate model and a shell model. The corotational edge-based hinge model uses an edge-based stencil to compute directional curvature, while the corotational FVM hinge model utilizes a triangle-centered stencil, applying the finite volume method (FVM) to superposition directional curvatures across edges, yielding a generalized curvature. The corotational smoothed hinge model also employs a triangle-centered stencil but transforms directional curvatures into a generalized curvature based on a quadratic surface fit. All models assume small strain and small curvature, leading to constant bending energy Hessians, which benefit implicit integrators. Through quantitative benchmarks and qualitative elastodynamic simulations with large time steps, we demonstrate the accuracy, efficiency, and stability of these models. Our contributions enhance the thin plate/shell library for use in both computer graphics and engineering applications.
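    For context, the classical edge-based hinge bending energy (in the discrete-shells tradition) that such hinge models build on can be written as follows; the corotational models above replace the angle-based curvature with curvature operators in a corotated frame, which is what yields constant bending Hessians:

    ```latex
    % Classical hinge bending energy over mesh edges e (standard form, not the paper's):
    E_{\text{bend}} = \sum_{e} k_e \,\bigl(\theta_e - \bar{\theta}_e\bigr)^2 \,\frac{\lVert \bar{e} \rVert}{\bar{A}_e},
    % theta_e: dihedral angle at edge e;  bar-theta_e: its rest value;
    % ||bar-e||: rest edge length;  bar-A_e: area associated with the two incident triangles.
    ```

    A rest-flat plate corresponds to $\bar{\theta}_e = 0$ for all edges, while a rest-curved shell stores nonzero rest angles.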
  • Item
    2D Neural Fields with Learned Discontinuities
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Liu, Chenxi; Wang, Siqi; Fisher, Matthew; Aneja, Deepali; Jacobson, Alec; Bousseau, Adrien; Day, Angela
    Effective representation of 2D images is fundamental in digital image processing, where traditional methods like raster and vector graphics struggle with sharpness and textural complexity, respectively. Current neural fields offer high fidelity and resolution independence but require predefined meshes with known discontinuities, restricting their utility. We observe that by treating all mesh edges as potential discontinuities, we can represent the discontinuity magnitudes as continuous variables and optimize. We further introduce a novel discontinuous neural field model that jointly approximates the target image and recovers discontinuities. Through systematic evaluations, our neural field outperforms other methods that fit unknown discontinuities with discontinuous representations, exceeding Field of Junctions and Boundary Attention by over 11dB in both denoising and super-resolution tasks and achieving 3.5× smaller Chamfer distances than Mumford-Shah-based methods. It also surpasses InstantNGP with improvements of more than 5dB (denoising) and 10dB (super-resolution). Additionally, our approach shows remarkable capability in approximating complex artistic and natural images and cleaning up diffusion-generated depth maps.
  • Item
    Approximating Procedural Models of 3D Shapes with Neural Networks
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Hossain, Ishtiaque; Shen, I-Chao; Kaick, Oliver van; Bousseau, Adrien; Day, Angela
    Procedural modeling is a popular technique for 3D content creation and offers a number of advantages over alternative techniques for modeling 3D shapes. However, given a procedural model, predicting the procedural parameters of existing data provided in different modalities can be challenging. This is because the data may be in a different representation than the one generated by the procedural model, and procedural models are usually not invertible, nor are they differentiable. In this paper, we address these limitations and introduce an invertible and differentiable representation for procedural models. We approximate parameterized procedures with a neural network architecture NNProc that learns both the forward and inverse mapping of the procedural model by aligning the latent spaces of shape parameters and shapes. The network is trained in a manner that is agnostic to the inner workings of the procedural model, implying that models implemented in different languages or systems can be used. We demonstrate how the proposed representation can be used for both forward and inverse procedural modeling. Moreover, we show how NNProc can be used in conjunction with optimization for applications such as shape reconstruction from an image or from 3D Gaussian Splatting.
  • Item
    Multi-Modal Instrument Performances (MMIP): A Musical Database
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Kyriakou, Theodoros; Aristidou, Andreas; Charalambous, Panayiotis; Bousseau, Adrien; Day, Angela
    Musical instrument performances are multimodal creative art forms that integrate audiovisual elements, resulting from musicians' interactions with instruments through body movements, finger actions, and facial expressions. Digitizing such performances for archiving, streaming, analysis, or synthesis requires capturing every element that shapes the overall experience, which is crucial for preserving the performance's essence. In this work, following current trends in large-scale dataset development for deep learning analysis and generative models, we introduce the Multi-Modal Instrument Performances (MMIP) database (https://mmip.cs.ucy.ac.cy). This is the first dataset to incorporate synchronized high-quality 3D motion capture data for the body, fingers, facial expressions, and instruments, along with audio, multi-angle videos, and MIDI data. The database currently includes 3.5 hours of performances featuring three instruments: guitar, piano, and drums. Additionally, we discuss the challenges of acquiring these multi-modal data, detailing our approach to data collection, signal synchronization, annotation, and metadata management. Our data formats align with industry standards for ease of use, and we have developed an open-access online repository that offers a user-friendly environment for data exploration, supporting data organization, search capabilities, and custom visualization tools. Notable features include a MIDI-to-instrument animation project for visualizing the instruments and a script for playing back FBX files with synchronized audio in a web environment.
  • Item
    DragPoser: Motion Reconstruction from Variable Sparse Tracking Signals via Latent Space Optimization
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Ponton, Jose Luis; Pujol, Eduard; Aristidou, Andreas; Andujar, Carlos; Pelechano, Nuria; Bousseau, Adrien; Day, Angela
    High-quality motion reconstruction that follows the user's movements can be achieved by high-end mocap systems with many sensors. However, obtaining such animation quality with fewer input devices is gaining popularity as it brings mocap closer to the general public. The main challenges include the loss of end-effector accuracy in learning-based approaches, or the lack of naturalness and smoothness in IK-based solutions. In addition, such systems are often finely tuned to a specific number of trackers and are highly sensitive to missing data, e.g., in scenarios where a sensor is occluded or malfunctions. In response to these challenges, we introduce DragPoser, a novel deep-learning-based motion reconstruction system that accurately represents hard and dynamic constraints, attaining high end-effector position accuracy in real time. This is achieved through a pose optimization process within a structured latent space. Our system requires only one-time training on a large human motion dataset; constraints can then be dynamically defined as losses, while the pose is iteratively refined by computing the gradients of these losses within the latent space. To further enhance our approach, we incorporate a Temporal Predictor network, which employs a Transformer architecture to directly encode temporality within the latent space. This network ensures the pose optimization is confined to the manifold of valid poses and also leverages past pose data to predict temporally coherent poses. Results demonstrate that DragPoser surpasses both IK-based and the latest data-driven methods in achieving precise end-effector positioning, while producing natural poses and temporally coherent motion. In addition, our system showcases robustness against on-the-fly constraint modifications and exhibits adaptability to various input configurations and changes.
The complete source code, trained model, animation databases, and supplementary material used in this paper can be found at https://upc-virvig.github.io/DragPoser
  • Item
    Eigenvalue Blending for Projected Newton
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Cheng, Yuan-Yuan; Liu, Ligang; Fu, Xiao-Ming; Bousseau, Adrien; Day, Angela
    We propose a novel method to filter eigenvalues for projected Newton. Central to our method is blending the clamped and absolute eigenvalues to adaptively compute the modified Hessian matrix. To determine the blending coefficients, we rely on (1) a key observation and (2) an objective function descent constraint. The observation is that if the quadratic form defined by the Hessian matrix maps the descent direction to a negative real number, the decrease in the objective function is limited. The constraint is that our eigenvalue filtering leads to a greater reduction in the objective function than absolute eigenvalue filtering [CLL∗24] in the case of a second-order Taylor approximation. Our eigenvalue blending is easy to implement and leads to fewer optimization iterations than state-of-the-art eigenvalue filtering methods.
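    A minimal sketch of the blending itself (assuming a symmetric Hessian; the paper derives the blending coefficient adaptively from its descent constraint, whereas here it is passed in as a free parameter):

```python
import numpy as np

def blended_projection(H, beta, eps=1e-8):
    """Modified Hessian blending clamped eigenvalues (beta = 0) with
    absolute eigenvalues (beta = 1). H is assumed symmetric."""
    w, V = np.linalg.eigh(H)                          # eigvecs in columns of V
    w_mod = (1.0 - beta) * np.maximum(w, eps) + beta * np.abs(w)
    return (V * w_mod) @ V.T                          # V diag(w_mod) V^T
```

Any blend of the two filters keeps the modified Hessian positive definite, so the resulting Newton direction is always a descent direction.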
  • Item
    ReConForM: Real-time Contact-aware Motion Retargeting for more Diverse Character Morphologies
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Cheynel, Théo; Rossi, Thomas; Bellot-Gurlet, Baptiste; Rohmer, Damien; Cani, Marie-Paule; Bousseau, Adrien; Day, Angela
    Preserving semantics, in particular in terms of contacts, is a key challenge when retargeting motion between characters of different morphologies. Our solution relies on a low-dimensional embedding of the character's mesh, based on rigged key vertices that are automatically transferred from the source to the target. Motion descriptors are extracted from the trajectories of these key vertices, providing an embedding that contains combined semantic information about both shape and pose. A novel, adaptive algorithm is then used to automatically select and weight the most relevant features over time, enabling us to efficiently optimize the target motion until it conforms to these constraints, so as to preserve the semantics of the source motion. Our solution allows extensions to several novel use-cases where morphology and mesh contacts were previously overlooked, such as multi-character retargeting and motion transfer on uneven terrains. As our results show, our method is able to achieve real-time retargeting onto a wide variety of characters. Extensive experiments and comparison with state-of-the-art methods using several relevant metrics demonstrate improved results, both in terms of motion smoothness and contact accuracy.
  • Item
    A Unified Discrete Collision Framework for Triangle Primitives
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Kikuchi, Tomoyo; Kanai, Takashi; Bousseau, Adrien; Day, Angela
    We present a unified, primitive-first framework based on discrete collision detection (DCD) for collision response in physics-based simulations. Previous methods do not offer a unified framework that resolves edge-triangle and edge-edge collisions while handling self-collisions and inter-object collisions. We define a scalar function and its gradient, representing the distance between two triangles and the movement direction for collision response, respectively. The resulting method offers an effective solution for collisions with minor computational overhead and robustness for any type of deformable object, such as solids or cloth. The algorithm is conceptually simple and easy to implement. When using PBD/XPBD, it is straightforward to incorporate our method into a collision constraint.
  • Item
    A Multimodal Personality Prediction Framework based on Adaptive Graph Transformer Network and Multi-task Learning
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Wang, Rongquan; Zhao, Xile; Xu, Xianyu; Hao, Yang; Bousseau, Adrien; Day, Angela
    Multimodal personality analysis targets accurately detecting personality traits by incorporating related multimodal information. However, existing methods focus on unimodal features while overlooking the bimodal association features crucial for this interdisciplinary task. Therefore, we propose a multimodal personality prediction framework based on an adaptive graph transformer network and multi-task learning. First, we utilize pre-trained models to learn specific representations from different modalities. Here, we employ pre-trained multimodal models' encoders as the backbones of the modality-specific extraction methods to mine unimodal features. We then introduce a novel adaptive graph transformer network to mine personality-related bimodal association features. This network effectively learns higher-order temporal dependencies based on relational graphs and emphasizes more significant features. Furthermore, we utilize a multimodal channel attention residual fusion module to obtain the fused features, and we propose a multimodal and unimodal joint learning regression head to learn and predict scores for personality traits. We design a multi-task loss function to enhance the robustness and accuracy of personality prediction. Experimental results on two benchmark datasets demonstrate the effectiveness of our framework, which outperforms the state-of-the-art methods. The code is available at https://github.com/RongquanWang/PPF-AGTNMTL.
  • Item
    Cloth Animation with Time-dependent Persistent Wrinkles
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Gong, Deshan; Yang, Yin; Shao, Tianjia; Wang, He; Bousseau, Adrien; Day, Angela
    Persistent wrinkles are often observed on crumpled garments, e.g., the wrinkles around the knees after sitting for a while. Such wrinkles can be easily recovered if not deformed for long, and otherwise become persistent. Since they are vital to the visual realism of cloth animation, we aim to simulate realistic-looking persistent wrinkles. To this end, we present a physics-inspired fine-grained wrinkle model. Different from existing methods, we recognize the importance of the interplay between internal friction and plasticity during wrinkle formation. Furthermore, we model their time dependence for persistent wrinkles. Our model is capable of not only simulating realistic wrinkle patterns, but also their time-dependent changes according to how long the deformation is maintained. Through extensive experiments, we show that our model is effective in simulating realistic spatially and temporally varying wrinkles, versatile in simulating different materials, and capable of generating more fine-grained wrinkles than the state of the art.
  • Item
    Does 3D Gaussian Splatting Need Accurate Volumetric Rendering?
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Celarek, Adam; Kopanas, Georgios; Drettakis, George; Wimmer, Michael; Kerbl, Bernhard; Bousseau, Adrien; Day, Angela
    Since its introduction, 3D Gaussian Splatting (3DGS) has become an important reference method for learning 3D representations of a captured scene, allowing real-time novel-view synthesis with high visual quality and fast training times. Neural Radiance Fields (NeRFs), which preceded 3DGS, are based on a principled ray-marching approach for volumetric rendering. In contrast, while sharing a similar image formation model with NeRF, 3DGS uses a hybrid rendering solution that builds on the strengths of volume rendering and primitive rasterization. A crucial benefit of 3DGS is its performance, achieved through a set of approximations, many of them relative to volumetric rendering theory. A naturally arising question is whether replacing these approximations with more principled volumetric rendering solutions can improve the quality of 3DGS. In this paper, we present an in-depth analysis of the various approximations and assumptions used by the original 3DGS solution. We demonstrate that, while more accurate volumetric rendering can help for low numbers of primitives, the power of efficient optimization and the large number of Gaussians allows 3DGS to outperform volumetric rendering despite its approximations.
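    The contrast between principled volumetric rendering and 3DGS-style compositing can be sketched in 1D (a toy comparison, not the paper's renderer): march the volume rendering integral through two Gaussian primitives, then composite the same primitives front to back with independently integrated opacities. For well-separated primitives the two agree closely.

```python
import numpy as np

def gauss(t, mu, s, a):
    return a * np.exp(-0.5 * ((t - mu) / s) ** 2)

# Two well-separated 1D Gaussian primitives along one ray.
mus, sigs, amps, cols = [2.0, 6.0], [0.3, 0.3], [2.0, 3.0], [1.0, 0.5]

# (a) Principled volumetric rendering by fine ray marching.
t = np.arange(0.0, 8.0, 1e-3)
dens = sum(gauss(t, m, s, a) for m, s, a in zip(mus, sigs, amps))
col = np.where(t < 4.0, cols[0], cols[1])        # piecewise-constant radiance
trans = np.exp(-np.cumsum(dens) * 1e-3)          # transmittance along the ray
C_volume = np.sum(col * dens * trans) * 1e-3

# (b) 3DGS-style compositing: integrate each primitive's opacity
# independently, then alpha-blend front to back.
alphas = [1.0 - np.exp(-a * s * np.sqrt(2 * np.pi)) for a, s in zip(amps, sigs)]
T, C_splat = 1.0, 0.0
for a, c in zip(alphas, cols):
    C_splat += T * a * c
    T *= 1.0 - a
```

When primitives overlap, the independent per-primitive integration no longer matches the true integral; that gap is one of the approximations the paper analyzes.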
  • Item
    A Unified Multi-scale Method for Simulating Immersed Bubbles
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Wretborn, Joel; Stomakhin, Alexey; Batty, Christopher; Bousseau, Adrien; Day, Angela
    We introduce a novel unified mixture-based method for simulating underwater bubbles across a range of bubble scales. Our approach represents bubbles as a set of Lagrangian particles that are coupled with the surrounding Eulerian water volume. When bubble particles are sparsely distributed, each particle, typically smaller than the liquid grid voxel size, corresponds to an individual spherical bubble. As the sub-grid particles increase in local density, our model smoothly aggregates them, ultimately forming connected, fully aerated volumetric regions that are properly resolved by the Eulerian grid. We complement our scheme with a continuous surface tension model, defined via the gradient of the bubbles' local volume fractions, which works seamlessly across this scale transition. Our unified representation allows us to capture a wide range of effects across different scales, from tiny dispersed sub-grid air pockets to fully Eulerian two-phase interfacial flows.
  • Item
    StyleBlend: Enhancing Style-Specific Content Creation in Text-to-Image Diffusion Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Chen, Zichong; Wang, Shijin; Zhou, Yang; Bousseau, Adrien; Day, Angela
    Synthesizing visually impressive images that seamlessly align with both text prompts and specific artistic styles remains a significant challenge in Text-to-Image (T2I) diffusion models. This paper introduces StyleBlend, a method designed to learn and apply style representations from a limited set of reference images, enabling the synthesis of content that is both text-aligned and stylistically coherent. Our approach uniquely decomposes style into two components, composition and texture, each learned through different strategies. We then leverage two synthesis branches, each focusing on a corresponding style component, to facilitate effective style blending through shared features without affecting content generation. StyleBlend addresses the common issues of text misalignment and weak style representation that previous methods have struggled with. Extensive qualitative and quantitative comparisons demonstrate the superiority of our approach.
  • Item
    Preconditioned Single-step Transforms for Non-rigid ICP
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Jung, Yucheol; Kim, Hyomin; Yoon, Hyejeong; Lee, Seungyong; Bousseau, Adrien; Day, Angela
    Non-rigid iterative closest point (ICP) is a popular framework for shape alignment, typically formulated as an alternating iteration of correspondence search and shape transformation. A common approach in the shape transformation stage is to solve a linear least squares problem to find a smoothness-regularized transform that fits the target shape. However, completely solving the linear least squares problem to obtain a transform is wasteful because the correspondences used for constructing the problem are imperfect, especially at early iterations. In this work, we design a novel framework to compute a transform in a single step without the exact linear solve. Our key idea is to use only a single step of an iterative linear system solver, conjugate gradient, at each shape transformation stage. For this single-step scheme to be effective, appropriate preconditioning of the linear system is required. We design a novel adaptive Sobolev-Jacobi preconditioning method for our single-step transform to produce a large and regularized shape update suitable for correspondence search in the next iteration. We demonstrate that our preconditioned single-step transform stably accelerates challenging 3D surface registration tasks.
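    The core move, replacing the exact linear solve with one preconditioned iteration, can be sketched on a toy SPD system (a plain Jacobi preconditioner stands in here for the paper's adaptive Sobolev-Jacobi scheme):

```python
import numpy as np

def single_step_transform(A, b, x, M_inv):
    """One preconditioned CG/steepest-descent step on A x = b,
    instead of solving the system exactly."""
    r = b - A @ x
    z = M_inv @ r                       # apply the preconditioner
    alpha = (r @ z) / (z @ (A @ z))     # exact line search along z
    return x + alpha * z

# Toy SPD system (stand-in for the smoothness-regularized fit).
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
M_inv = np.diag(1.0 / np.diag(A))       # Jacobi preconditioner
x0 = np.zeros(3)
x1 = single_step_transform(A, b, x0, M_inv)
```

Exact line search along the preconditioned residual guarantees the quadratic objective decreases even though the system is never solved exactly, which is what makes a single step per ICP iteration viable.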
  • Item
    FlairGPT: Repurposing LLMs for Interior Designs
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Littlefair, Gabrielle; Dutt, Niladri Shekhar; Mitra, Niloy J.; Bousseau, Adrien; Day, Angela
    Interior design involves the careful selection and arrangement of objects to create an aesthetically pleasing, functional, and harmonized space that aligns with the client's design brief. This task is particularly challenging, as a successful design must not only incorporate all the necessary objects in a cohesive style, but also ensure they are arranged in a way that maximizes accessibility, while adhering to a variety of affordability and usage considerations. Data-driven solutions have been proposed, but these are typically room- or domain-specific and lack explainability regarding the design considerations used to produce the final layout. In this paper, we investigate if large language models (LLMs) can be directly utilized for interior design. While we find that LLMs are not yet capable of generating complete layouts, they can be effectively leveraged in a structured manner, inspired by the workflow of interior designers. By systematically probing LLMs, we can reliably generate a list of objects along with relevant constraints that guide their placement. We translate this information into a design layout graph, which is then solved using an off-the-shelf constrained optimization setup to generate the final layouts. We benchmark our algorithm in various design configurations against existing LLM-based methods and human designs, and evaluate the results using a variety of quantitative and qualitative metrics along with user studies. In summary, we demonstrate that LLMs, when used in a structured manner, can effectively generate diverse high-quality layouts, making them a viable solution for creating large-scale virtual scenes. Code is available via the project webpage.
  • Item
    Isosurface Extraction for Signed Distance Functions using Power Diagrams
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Kohlbrenner, Maximilian; Alexa, Marc; Bousseau, Adrien; Day, Angela
    Contouring an implicit function typically considers only function values in the vicinity of the desired level set. In a recent string of works, Sellán et al. have demonstrated that signed distance values also contain useful information when taken further away from the surface. This can be exploited to increase the resolution and amount of detail in surface reconstruction from signed distance values. We argue that the right tool for this analysis is a regular triangulation of the distance samples, with the weights chosen based on the distance values. The resulting triangulation is better suited for reconstructing the surface than a standard Delaunay triangulation of the samples. Moreover, the dual power diagram encodes the envelope enclosing the surface, consisting of spherical caps. We discuss how this information can be exploited for reconstructing the surface. In particular, the approach based on regular triangulations lends itself well to refining the sample set. Refining the sample set based on the power diagram outperforms other reconstruction methods relative to the sample count.
  • Item
    D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Kappel, Moritz; Hahlbohm, Florian; Scholz, Timon; Castillo, Susana; Theobalt, Christian; Eisemann, Martin; Golyanik, Vladislav; Magnor, Marcus; Bousseau, Adrien; Day, Angela
    Dynamic reconstruction and spatiotemporal novel-view synthesis of non-rigidly deforming scenes has recently gained increased attention. While existing work achieves impressive quality and performance on multi-view or teleporting camera setups, most methods fail to efficiently and faithfully recover motion and appearance from casual monocular captures. This paper contributes to the field by introducing a new method for dynamic novel view synthesis from monocular video, such as casual smartphone captures. Our approach represents the scene as a dynamic neural point cloud, an implicit time-conditioned point distribution that encodes local geometry and appearance in separate hash-encoded neural feature grids for static and dynamic regions. By sampling a discrete point cloud from our model, we can efficiently render high-quality novel views using a fast differentiable rasterizer and neural rendering network. Similar to recent work, we leverage advances in neural scene analysis by incorporating data-driven priors like monocular depth estimation and object segmentation to resolve motion and depth ambiguities originating from the monocular captures. In addition to guiding the optimization process, we show that these priors can be exploited to explicitly initialize our scene representation to drastically improve optimization speed and final image quality. As evidenced by our experimental evaluation, our dynamic point cloud model not only enables fast optimization and real-time frame rates for interactive applications, but also achieves competitive image quality on monocular benchmark sequences. Our code and data are available online at https://moritzkappel.github.io/projects/dnpc/.
  • Item
    Text-Guided Interactive Scene Synthesis with Scene Prior Guidance
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Fang, Shaoheng; Yang, Haitao; Mooney, Raymond; Huang, Qixing; Bousseau, Adrien; Day, Angela
    3D scene synthesis using natural language instructions has become a popular direction in computer graphics, with significant progress made by data-driven generative models recently. However, previous methods have mainly focused on one-time scene generation, lacking the interactive capability to generate, update, or correct scenes according to user instructions. To overcome this limitation, this paper focuses on text-guided interactive scene synthesis. First, we introduce the SceneMod dataset, which comprises 168k paired scenes with textual descriptions of the modifications. To support the interactive scene synthesis task, we propose a two-stage diffusion generative model that integrates scene-prior guidance into the denoising process to explicitly enforce physical constraints and foster more realistic scenes. Experimental results demonstrate that our approach outperforms baseline methods in text-guided scene synthesis tasks. Our system expands the scope of data-driven scene synthesis tasks and provides a novel, more flexible tool for users and designers in 3D scene generation. Code and dataset are available at https://github.com/bshfang/SceneMod.
  • Item
    Differential Diffusion: Giving Each Pixel Its Strength
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Levin, Eran; Fried, Ohad; Bousseau, Adrien; Day, Angela
    Diffusion models have revolutionized image generation and editing, producing state-of-the-art results in conditioned and unconditioned image synthesis. While current techniques enable user control over the degree of change in an image edit, the controllability is limited to global changes over an entire edited region. This paper introduces a novel framework that enables customization of the amount of change per pixel or per image region. Our framework can be integrated into any existing diffusion model, enhancing it with this capability. Such granular control opens up a diverse array of new editing capabilities, such as control of the extent to which individual objects are modified, or the ability to introduce gradual spatial changes. Furthermore, we showcase the framework's effectiveness in soft inpainting: the completion of portions of an image while subtly adjusting the surrounding areas to ensure seamless integration. Additionally, we introduce a new tool for exploring the effects of different change quantities. Our framework operates solely during inference, requiring no model training or fine-tuning. We demonstrate our method with the current open state-of-the-art models, and validate it via both quantitative and qualitative comparisons, and a user study. Our code is published and integrated into several platforms.
  • Item
    Axis-Normalized Ray-Box Intersection
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Friederichs, Fabian; Benthin, Carsten; Grogorick, Steve; Eisemann, Elmar; Magnor, Marcus; Eisemann, Martin; Bousseau, Adrien; Day, Angela
    Ray versus axis-aligned bounding box intersection tests play a crucial role in the runtime performance of many rendering applications, driven not by their complexity but mainly by the sheer volume of tests required. While existing solutions were believed to be nearly optimal in terms of runtime on current hardware, our paper introduces a new intersection test requiring fewer arithmetic operations than all previous methods. By transforming the ray, we eliminate the need for one third of the traditional bounding-slab tests and achieve a speed enhancement of approximately 13.8% or 10.9%, depending on the compiler. We present detailed runtime analyses in various scenarios.
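    For reference, the conventional three-slab baseline that the paper reduces looks as follows (a standard sketch with a precomputed reciprocal direction, as is common in production tracers; IEEE infinities handle axis-parallel rays):

```python
import numpy as np

def slab_hit(origin, inv_dir, box_lo, box_hi):
    """Classic slab test: intersect the ray's parameter interval with
    each axis-aligned slab; hit if the interval stays non-empty."""
    t0 = (box_lo - origin) * inv_dir
    t1 = (box_hi - origin) * inv_dir
    t_near = np.minimum(t0, t1).max()    # latest entry over the 3 slabs
    t_far = np.maximum(t0, t1).min()     # earliest exit
    return t_far >= max(t_near, 0.0)
```

Each of the three slabs costs two subtractions, two multiplications, and a min/max pair; the paper's ray transformation removes one of the three slab tests entirely.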
  • Item
    VortexTransformer: End-to-End Objective Vortex Detection in 2D Unsteady Flow Using Transformers
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Zhang, Xingdi; Rautek, Peter; Hadwiger, Markus; Bousseau, Adrien; Day, Angela
    Vortex structures play a pivotal role in understanding complex fluid dynamics, yet defining them rigorously remains challenging. One hard criterion is that a vortex detector must be objective, i.e., it needs to be indifferent to reference frame transformations. We propose VortexTransformer, a novel deep learning approach using point transformer architectures to directly extract vortex structures from pathlines. Unlike traditional methods that rely on grid-based velocity fields in the Eulerian frame, our approach operates entirely on a Lagrangian representation of the flow field (i.e., pathlines), enabling objective identification of both strong and weak vortex structures. To train VortexTransformer, we generate a large synthetic dataset using parametric flow models to simulate diverse vortex configurations, ensuring a robust ground truth. We compare our method against CNN and UNet architectures, applying the trained models to real-world flow datasets. VortexTransformer is an end-to-end detector, which means that reference frame transformations as well as vortex detection are handled implicitly by the network, demonstrating the ability to extract vortex boundaries without the need for parameters such as arbitrary thresholds, or an explicit definition of a vortex. Our method offers a new approach to determining objective vortex labels by using the objective pairwise distances of material points for vortex detection and is adaptable to various flow conditions.
  • Item
    A Semi-Implicit SPH Method for Compressible and Incompressible Flows with Improved Convergence
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) He, Xiaowei; Liu, Shusen; Guo, Yuzhong; Shi, Jian; Qiao, Ying; Bousseau, Adrien; Day, Angela
    In simulating fluids using position-based dynamics, the accuracy and robustness depend on numerous numerical parameters, including the time step size, iteration count, and particle size, among others. This complexity can lead to unpredictable control of simulation behaviors. In this paper, we first reformulate the problem of enforcing fluid compressibility/incompressibility into a nonlinear optimization problem, and then introduce a semi-implicit successive substitution method (SISSM) to solve it by adjusting particle positions in parallel. In contrast to calculating an intermediate variable, such as pressure, to enforce fluid incompressibility within the position-based dynamics (PBD) framework, the proposed semi-implicit approach eliminates the necessity of such calculations. Instead, it directly employs successive substitution of particle positions to correct density errors. This method exhibits reduced dependency on numerical parameters, such as particle size and time step variations, and improves consistency and stability in simulating fluids that range from highly compressible to nearly incompressible. We validate the effectiveness of a variety of techniques for accelerating the convergence rate.
  • Item
    S-ACORD: Spectral Analysis of COral Reef Deformation
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Alon-Borissiouk, Naama; Yuval, Matan; Treibitz, Tali; Ben-Chen, Mirela; Bousseau, Adrien; Day, Angela
    We propose an efficient pipeline to register, detect, and analyze changes in 3D models of coral reefs captured over time. Corals have complex structures with intricate geometric features at multiple scales. 3D reconstructions of corals (e.g., using photogrammetry) are represented by dense triangle meshes with millions of vertices. Hence, identifying correspondences quickly using conventional state-of-the-art algorithms is challenging. To address this gap we employ the Globally Optimal Iterative Closest Point (GO-ICP) algorithm to compute correspondences, and a fast approximation algorithm (FastSpectrum) to extract the eigenvectors of the Laplace-Beltrami operator for creating functional maps. Finally, by visualizing the distortion of these maps we identify changes in the coral reefs over time. Our approach is fully automatic, does not require user-specified landmarks or an initial map, and surpasses competing shape correspondence methods on coral reef models. Furthermore, our analysis has detected the changes manually marked by humans, as well as additional changes at a smaller scale that were missed during manual inspection. We have additionally used our system to analyze a coral reef model that was too extensive for manual analysis, and validated that the changes identified by the system were correct.
  • Item
    NePHIM: A Neural Physics-Based Head-Hand Interaction Model
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Wagner, Nicolas; Schwanecke, Ulrich; Botsch, Mario; Bousseau, Adrien; Day, Angela
    Due to the increasing use of virtual avatars, the animation of head-hand interactions has recently gained attention. To this end, we present a novel volumetric and physics-based interaction simulation. In contrast to previous work, our simulation incorporates temporal effects such as collision paths, respects anatomical constraints, and can detect and simulate skin pulling. As a result, we can achieve more natural-looking interaction animations and take a step towards greater realism. However, like most complex and computationally expensive simulations, ours is not real-time capable even on high-end machines. Therefore, we train small and efficient neural networks as accurate approximations that achieve about 200 FPS on consumer GPUs, about 50 FPS on CPUs, and are learned in less than four hours for one person. In general, our focus is not to generalize the approximation networks to low-resolution head models but to adapt them to more detailed personalized avatars. Nevertheless, we show that these networks can learn to approximate our head-hand interaction model for multiple identities while maintaining computational efficiency. Since the quality of the simulations can only be judged subjectively, we conducted a comprehensive user study which confirms the improved realism of our approach. In addition, we provide extensive visual results and inspect the neural approximations quantitatively. All data used in this work has been recorded with a multi-view camera rig. Code and data are available at https://gitlab.cs.hs-rm.de/cvmr_releases/HeadHand.
  • Item
    HPRO: Direct Visibility of Point Clouds for Optimization
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Katz, Sagi; Tal, Ayellet; Bousseau, Adrien; Day, Angela
    Given a point cloud, which is assumed to be a sampling of a continuous surface, and a viewpoint, which points are visible from that viewpoint? Since points do not occlude each other, the real question is which points would be visible if the surface they were sampled from were known. While an existing approximation method addresses this problem, it is unsuitable for use in optimization processes or learning models due to its lack of differentiability. To overcome this limitation, the paper introduces a novel differentiable approximation method. It is based on identifying the extreme points of a point set in a differentiable manner. This approach can be effectively integrated into optimization algorithms or used as a layer in neural networks, allowing for the computation and utilization of visible points in various tasks, such as optimal viewpoint selection. The paper also provides theoretical proofs of the operator's correctness in the limit, further validating its effectiveness. The code is available at https://github.com/sagikatz/HPRO
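    The classic, non-differentiable HPR approximation that this work builds on can be sketched in 2D: "spherical flipping" about a large sphere centered at the viewpoint, followed by a convex-hull membership test (a simple monotone-chain hull is used below; the paper's contribution is a differentiable replacement for this hull step):

```python
import numpy as np

def convex_hull_2d(pts):
    """Andrew's monotone chain; returns the set of hull-vertex indices."""
    order = sorted(range(len(pts)), key=lambda i: (pts[i][0], pts[i][1]))

    def half_hull(ids):
        h = []
        for i in ids:
            while len(h) >= 2:
                o, a, b = pts[h[-2]], pts[h[-1]], pts[i]
                cross = (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
                if cross > 0.0:          # strict left turn: keep the vertex
                    break
                h.pop()
            h.append(i)
        return h

    return set(half_hull(order)[:-1] + half_hull(order[::-1])[:-1])

def hpr_visible(points, viewpoint, gamma=100.0):
    """Classic HPR: flip points about a large sphere centered at the
    viewpoint; points landing on the convex hull are deemed visible."""
    q = points - viewpoint
    d = np.linalg.norm(q, axis=1, keepdims=True)
    R = gamma * d.max()
    flipped = q * (2.0 * R / d - 1.0)            # |q| -> 2R - |q|, same direction
    hull = convex_hull_2d(np.vstack([flipped, np.zeros(2)]))
    return hull - {len(points)}                  # drop the appended viewpoint
```

Flipping maps near points outside far points along the same ray, so occluded points fall strictly inside the hull; the hull test is the discrete, non-differentiable step that the paper replaces with a differentiable extreme-point identification.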
  • Item
    Optimizing Free-Form Grid Shells with Reclaimed Elements under Inventory Constraints
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Favilli, Andrea; Laccone, Francesco; Cignoni, Paolo; Malomo, Luigi; Giorgi, Daniela; Bousseau, Adrien; Day, Angela
    We propose a method for designing 3D architectural free-form surfaces, represented as grid shells with beams sourced from inventories of reclaimed elements from dismantled buildings. In inventory-constrained design, the reused elements must be paired with elements in the target design. Traditional solutions to this assignment problem often result in cuts and material waste or geometric distortions that affect the surface aesthetics and buildability. Our method for inventory-constrained assisted design blends the traditional assignment problem with differentiable geometry optimization to reduce cut-off waste while preserving the design intent. Additionally, we extend our approach to incorporate strain energy minimization for structural efficiency. We design differentiable losses that account for inventory, geometry, and structural constraints, and streamline them into a complete pipeline, demonstrated through several case studies. Our approach enables the reuse of existing elements for new designs, reducing the need for sourcing new materials and disposing of waste. Consequently, it can serve as an initial step towards mitigating the significant environmental impact of the construction sector.
  • Item
    Inverse Simulation of Radiative Thermal Transport
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Freude, Christian; Lipp, Lukas; Zezulka, Matthias; Rist, Florian; Wimmer, Michael; Hahn, David; Bousseau, Adrien; Day, Angela
    The early phase of urban planning and architectural design has a great impact on the thermal loads and characteristics of constructed buildings. It is, therefore, important to efficiently simulate thermal effects early on and rectify possible problems. In this paper, we present an inverse simulation of radiative heat transport and a differentiable photon-tracing approach. Our method utilizes GPU-accelerated ray tracing to speed up both the forward and adjoint simulation. Moreover, we incorporate matrix compression to further increase the efficiency of our thermal solver and support larger scenes. In addition to our differentiable photon-tracing approach, we introduce a novel approximate edge sampling scheme that re-uses primary samples instead of relying on explicit edge samples or auxiliary rays to resolve visibility discontinuities. Our inverse simulation system enables designers to not only predict the temperature distribution, but also automatically optimize the design to improve thermal comfort and avoid problematic configurations. We showcase our approach using several examples in which we optimize the placement of buildings or their facade geometry. Our approach can be used to optimize arbitrary geometric parameterizations and supports steady-state, as well as transient simulations.
  • Item
    Linearly Transformed Spherical Distributions for Interactive Single Scattering with Area Lights
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Kt, Aakash; Shah, Ishaan; Narayanan, P. J.; Bousseau, Adrien; Day, Angela
    Single scattering in scenes with participating media is challenging, especially in the presence of area lights. Considerable variance still remains, in spite of good importance sampling strategies. Analytic methods that render unshadowed surface illumination have recently gained interest since they achieve biased but noise-free plausible renderings while being computationally efficient. In this work, we extend the theory of Linearly Transformed Spherical Distributions (LTSDs) which is a well-known analytic method for surface illumination, to work with phase functions. We show that this is non-trivial, and arrive at a solution with in-depth analysis. This enables us to analytically compute in-scattered radiance, which we build on to semi-analytically render unshadowed single scattering. We ground our derivations and formulations on the Volume Rendering Equation (VRE) which paves the way for realistic renderings despite the biased nature of our method. We also formulate ratio estimators for the VRE to work in conjunction with our formulation, enabling the rendering of shadows. We extensively validate our method, analyze its characteristics and demonstrate better performance compared to Monte Carlo single-scattering.
  • Item
    Neural Two-Level Monte Carlo Real-Time Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Dereviannykh, Mikhail; Klepikov, Dmitrii; Hanika, Johannes; Dachsbacher, Carsten; Bousseau, Adrien; Day, Angela
We introduce an efficient Two-Level Monte Carlo (subset of Multi-Level Monte Carlo, MLMC) estimator for real-time rendering of scenes with global illumination. Using MLMC we split the shading integral into two parts: the radiance cache integral and the residual error integral that compensates for the bias of the first one. For the first part, we developed the Neural Incident Radiance Cache (NIRC) leveraging the power of tiny neural networks [MRNK21] as a building block, which is trained on the fly. The cache is designed to provide a fast and reasonable approximation of the incident radiance: an evaluation takes 2-25× less compute time than a path tracing sample. This enables us to estimate the radiance cache integral with a high number of samples and thereby achieve faster convergence. For the residual error integral, we compute the difference between the NIRC predictions and the unbiased path tracing simulation. Our method makes no assumptions about the geometry, materials, or lighting of a scene and has only a few intuitive hyper-parameters. We provide a comprehensive comparative analysis in different experimental scenarios. Since the algorithm is trained in an online fashion, it demonstrates significant noise level reduction even for dynamic scenes and can easily be combined with other noise reduction techniques.
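The two-level split E[f] = E[g] + E[f − g] that the estimator relies on can be sketched in isolation, with plain 1D integration and a hand-written cheap "cache" g in place of the neural cache (no rendering code; names are illustrative). Many samples go to the cheap, biased g; a few residual samples remove its bias.

```python
import random

def two_level_estimate(f, g, sample, n_cache=4096, n_residual=256, rng=None):
    """Two-level Monte Carlo: E[f] = E[g] + E[f - g].  The cheap cache g
    gets many samples; the expensive residual f - g, which compensates
    for g's bias, gets few samples but has low variance when g is close to f."""
    rng = rng or random.Random(0)
    cache = sum(g(sample(rng)) for _ in range(n_cache)) / n_cache
    resid = sum(f(x) - g(x)
                for x in (sample(rng) for _ in range(n_residual))) / n_residual
    return cache + resid

# Integrate f(x) = x^2 on [0, 1] (true value 1/3) using the biased but
# cheap "cache" g(x) = x; the residual term corrects the bias.
rng = random.Random(7)
est = two_level_estimate(lambda x: x*x, lambda x: x,
                         lambda r: r.random(), rng=rng)
```

Because f − g has much lower variance than f, the few residual samples suffice; this is exactly why the estimator converges faster than sampling f directly.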
  • Item
    Adaptive Multi-view Radiance Caching for Heterogeneous Participating Media
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Stadlbauer, Pascal; Tatzgern, Wolfgang; Mueller, Joerg H.; Winter, Martin; Stojanovic, Robert; Weinrauch, Alexander; Steinberger, Markus; Bousseau, Adrien; Day, Angela
Achieving lifelike atmospheric effects, such as fog, is essential in creating immersive environments and poses a formidable challenge in real-time rendering. Highly realistic rendering of complex lighting interacting with dynamic fog can be very resource-intensive, due to light bouncing through complex participating media multiple times. We propose an approach that uses a multi-layered spherical harmonics probe grid to share computations temporally. In addition, this world-space storage enables the sharing of radiance data between multiple viewers. In the context of cloud rendering this means faster rendering and a significant enhancement in overall rendering quality with efficient resource utilization.
  • Item
    ASMR: Adaptive Skeleton-Mesh Rigging and Skinning via 2D Generative Prior
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Hong, Seokhyeon; Choi, Soojin; Kim, Chaelin; Cha, Sihun; Noh, Junyong; Bousseau, Adrien; Day, Angela
Despite the growing accessibility of skeletal motion data, integrating it for animating character meshes remains challenging due to diverse configurations of both skeletons and meshes. Specifically, the body scale and bone lengths of the skeleton should be adjusted in accordance with the size and proportions of the mesh, ensuring that all joints are accurately positioned within the character mesh. Furthermore, defining skinning weights is complicated by variations in skeletal configurations, such as the number of joints and their hierarchy, as well as differences in mesh configurations, including their connectivity and shapes. While existing approaches have made efforts to automate this process, they hardly address the variations in both skeletal and mesh configurations. In this paper, we present a novel method for the automatic rigging and skinning of character meshes using skeletal motion data, accommodating arbitrary configurations of both meshes and skeletons. The proposed method predicts the optimal skeleton aligned with the size and proportion of the mesh as well as defines skinning weights for various mesh-skeleton configurations, without requiring explicit supervision tailored to each of them. By incorporating Diffusion 3D Features (Diff3F) as semantic descriptors of character meshes, our method achieves robust generalization across different configurations. To assess the performance of our method in comparison to existing approaches, we conducted comprehensive evaluations encompassing both quantitative and qualitative analyses, specifically examining the predicted skeletons, skinning weights, and deformation quality.
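For readers unfamiliar with how the predicted skinning weights are consumed downstream, standard linear blend skinning deforms each vertex as a weight-blended sum of per-joint transforms. This generic sketch shows the mechanism only; it is not the paper's model.

```python
def lbs(vertex, weights, transforms):
    """Linear blend skinning: v' = sum_i w_i * (R_i v + t_i), where each
    transform is a (3x3 rotation, translation) pair for one joint."""
    out = [0.0, 0.0, 0.0]
    for w, (R, t) in zip(weights, transforms):
        for r in range(3):
            out[r] += w * (sum(R[r][c] * vertex[c] for c in range(3)) + t[r])
    return tuple(out)

# Identity transforms leave the vertex in place; weighting a translated
# joint at 0.75 moves the vertex 75% of the way.
v = (1.0, 2.0, 3.0)
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
rest = lbs(v, [0.5, 0.5], [(I, (0, 0, 0)), (I, (0, 0, 0))])
moved = lbs(v, [0.25, 0.75], [(I, (0, 0, 0)), (I, (4, 0, 0))])
```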
  • Item
    Image Vectorization via Gradient Reconstruction
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Chakraborty, Souymodip; Batra, Vineet; Phogat, Ankit; Jain, Vishwas; Ranawat, Jaswant Singh; Dhingra, Sumit; Wampler, Kevin; Fisher, Matthew; Lukác, Michal; Bousseau, Adrien; Day, Angela
    We present a fully automated technique that segments raster images into smooth shaded regions and reconstructs them using an optimal mix of solid fills, linear gradients, and radial gradients. Our method leverages a novel discontinuity-aware segmentation strategy and gradient reconstruction algorithm to accurately capture intricate shading details and produce compact Bézier curve representations. Extensive evaluations on both designer-created art and generative images demonstrate that our approach achieves high visual fidelity with minimal geometric complexity and fast processing times. This work offers a robust and versatile solution for converting detailed raster images into scalable vector graphics, addressing the evolving needs of modern design workflows.
  • Item
    Implicit UVs: Real-time Semi-global Parameterization of Implicit Surfaces
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Genest, Baptiste; Gueth, Pierre; Levallois, Jérémy; Wang, Stephanie; Bousseau, Adrien; Day, Angela
Implicit representations of shapes are broadly used in computer graphics since they offer many valuable properties in design, modeling, and animation. However, their implicit and volumetric nature makes applying 2D textures fundamentally challenging. We propose a method to compute point-wise and parallelizable semi-global parameterizations of implicit surfaces for texturing, rendering, and modeling purposes. Our method not only defines local patches of parameterization, but also enables the merging of multiple adjacent patches into large and spatially coherent ones that conform to the geometry. Implemented in shaders within a sphere-tracing pipeline, our method allows users to edit the uv-fields with real-time visualization. We demonstrate how to add rendering details (texture, normal, displacement, etc.) using our parameterization, as well as extending modeling tools with implicit shell maps. Furthermore, the textured objects remain implicit and can still be used in a modeling pipeline.
  • Item
    Lipschitz Pruning: Hierarchical Simplification of Primitive-Based SDFs
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Barbier, Wilhem; Sanchez, Mathieu; Paris, Axel; Michel, Élie; Lambert, Thibaud; Boubekeur, Tamy; Paulin, Mathias; Thonat, Theo; Bousseau, Adrien; Day, Angela
Rendering tree-based analytical Signed Distance Fields (SDFs) through sphere tracing often requires evaluating many primitives per tracing step, for many steps per pixel of the end image. This cost quickly becomes prohibitive as the number of primitives that constitute the SDF grows. In this paper, we alleviate this cost by computing local pruned trees that are equivalent to the full tree within their region of space while being much faster to evaluate. We introduce an efficient hierarchical tree pruning method based on the Lipschitz property of SDFs, which is compatible with hard and smooth CSG operators. We propose a GPU implementation that enables real-time sphere tracing of complex SDFs composed of thousands of primitives with dynamic animation. Our pruning technique provides significant speedups for SDF evaluation in general, which we demonstrate on sphere tracing tasks but could also lead to significant improvement for SDF discretization or polygonization.
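The Lipschitz pruning idea can be illustrated on the simplest tree, a flat min() over 1-Lipschitz sphere primitives: inside a ball of radius r around a region center c, any primitive whose distance at c exceeds the minimum by more than 2r can never win the min anywhere in that ball, so it is safely dropped. A minimal sketch with hypothetical helper names (the paper handles full CSG trees with hard and smooth operators):

```python
import math

def sphere_sdf(center, radius):
    cx, cy, cz = center
    return lambda x, y, z: math.dist((x, y, z), (cx, cy, cz)) - radius

def prune(prims, region_center, region_radius):
    """Lipschitz pruning for a min()-combined SDF: each primitive is
    1-Lipschitz, so if d_i(c) > min_j d_j(c) + 2r it cannot attain the
    minimum anywhere in the ball B(c, r) and is discarded."""
    dists = [p(*region_center) for p in prims]
    d_min = min(dists)
    return [p for p, d in zip(prims, dists) if d <= d_min + 2.0 * region_radius]

def evaluate(prims, x, y, z):
    return min(p(x, y, z) for p in prims)

# The far-away sphere at x = 10 is pruned for the region B(origin, 1);
# inside that region the pruned tree evaluates identically to the full one.
prims = [sphere_sdf((0, 0, 0), 1.0),
         sphere_sdf((10, 0, 0), 1.0),
         sphere_sdf((0.5, 0, 0), 0.5)]
pruned = prune(prims, (0.0, 0.0, 0.0), 1.0)
```

The bound follows from |d_i(x) − d_i(c)| ≤ r for any 1-Lipschitz d_i: the pruned primitive stays strictly above the region's current minimizer everywhere in the ball.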
  • Item
    Real-Time Rendering Framework for Holography
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Fricke, Sascha; Castillo, Susana; Eisemann, Martin; Magnor, Marcus; Bousseau, Adrien; Day, Angela
With the advent of holographic near-eye displays, the need for rendering algorithms that output holograms instead of color images emerged. These holograms usually encode phase maps that alter the phase of coherent light sources such that images result from diffraction effects. While common approaches rely on translating the output of traditional rendering systems to holograms in a post processing step, we instead developed a rendering system that can directly output a phase map to a Spatial Light Modulator (SLM). Our hardware-ray-traced sparse point distribution and depth mapping enable rapid hologram generation, allowing for high-quality time-multiplexed holography for real-time content. Additionally, our system is compatible with foveated rendering which enables further performance optimizations.
  • Item
    Many-Light Rendering Using ReSTIR-Sampled Shadow Maps
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Zhang, Song; Lin, Daqi; Wyman, Chris; Yuksel, Cem; Bousseau, Adrien; Day, Angela
We present a practical method targeting dynamic shadow maps for many light sources in real-time rendering. We compute full-resolution shadow maps for a subset of lights, which we select with spatiotemporal reservoir resampling (ReSTIR). Our selection strategy automatically regenerates shadow maps for lights with the strongest contributions to pixels in the current camera view. The remaining lights are handled using imperfect shadow maps, which provide low-resolution shadow approximations. We significantly reduce the computation and storage compared to using all full-resolution shadow maps and substantially improve shadow quality compared to handling all lights with imperfect shadow maps.
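The resampling machinery behind the light selection reduces, per pixel, to single-sample weighted reservoir sampling: stream over candidate lights and keep one with probability proportional to its contribution weight. A minimal standalone sketch, without the spatial or temporal reuse that gives ReSTIR its name:

```python
import random

class Reservoir:
    """Single-sample weighted reservoir, the building block ReSTIR-style
    selection uses to pick one candidate proportionally to its weight."""
    def __init__(self):
        self.sample = None
        self.w_sum = 0.0

    def update(self, candidate, weight, rng):
        self.w_sum += weight
        if weight > 0 and rng.random() < weight / self.w_sum:
            self.sample = candidate

def select_light(weights, rng):
    r = Reservoir()
    for i, w in enumerate(weights):
        r.update(i, w, rng)
    return r.sample

# Light 1 carries ~93% of the total weight, so it is selected for a
# full-resolution shadow map in the vast majority of trials.
rng = random.Random(3)
weights = [0.1, 5.0, 0.2, 0.05]
counts = [0, 0, 0, 0]
for _ in range(10000):
    counts[select_light(weights, rng)] += 1
```

Streaming selection like this needs O(1) memory per pixel regardless of the light count, which is what makes it practical for many-light scenes.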
  • Item
    Generative Motion Infilling from Imprecisely Timed Keyframes
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Goel, Purvi; Zhang, Haotian; Liu, C. Karen; Fatahalian, Kayvon; Bousseau, Adrien; Day, Angela
    Keyframes are a standard representation for kinematic motion specification. Recent learned motion-inbetweening methods use keyframes as a way to control generative motion models, and are trained to generate life-like motion that matches the exact poses and timings of input keyframes. However, the quality of generated motion may degrade if the timing of these constraints is not perfectly consistent with the desired motion. Unfortunately, correctly specifying keyframe timings is a tedious and challenging task in practice. Our goal is to create a system that synthesizes high-quality motion from keyframes, even if keyframes are imprecisely timed. We present a method that allows constraints to be retimed as part of the generation process. Specifically, we introduce a novel model architecture that explicitly outputs a time-warping function to correct mistimed keyframes, and spatial residuals that add pose details. We demonstrate how our method can automatically turn approximately timed keyframe constraints into diverse, realistic motions with plausible timing and detailed submovements.
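One common way to let a network output a valid retiming, consistent with the time-warping function described above, is to accumulate positivized increments so the warp is strictly monotone by construction. The softplus shaping below is an illustrative assumption, not the paper's architecture:

```python
import math

def softplus(x):
    """Numerically stable softplus: log(1 + e^x) > 0 for all x."""
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def monotone_time_warp(raw_increments, duration):
    """Turn unconstrained network outputs into a strictly increasing warp
    t_0 = 0 < t_1 < ... < t_n = duration by accumulating softplus-positive
    steps and rescaling to the clip duration."""
    steps = [softplus(v) for v in raw_increments]
    total = sum(steps)
    times, acc = [0.0], 0.0
    for s in steps:
        acc += s
        times.append(duration * acc / total)
    return times

# Even with negative raw outputs, the resulting keyframe times are
# strictly increasing and span exactly [0, duration].
warp = monotone_time_warp([0.3, -1.2, 2.0, 0.0], 10.0)
```

Monotonicity is what makes the warp a legal retiming: keyframe order is preserved, only their spacing is corrected.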
  • Item
    Learning Metric Fields for Fast Low-Distortion Mesh Parameterizations
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Fargion, Guy; Weber, Ofir; Bousseau, Adrien; Day, Angela
    We present a fast and robust method for computing an injective parameterization with low isometric distortion for disk-like triangular meshes. Harmonic function-based methods, with their rich mathematical foundation, are widely used. Harmonic maps are particularly valuable for ensuring injectivity under certain boundary conditions. In addition, they offer computational efficiency by forming a linear subspace [FW22]. However, this restricted subspace often leads to significant isometric distortion, especially for highly curved surfaces. Conversely, methods that operate in the full space of piecewise linear maps [SPSH∗17] achieve lower isometric distortion, but at a higher computational cost. Aigerman et al. [AGK∗22] pioneered a parameterization method that uses deep neural networks to predict the Jacobians of the map at mesh triangles, and integrates them into an explicit map by solving a Poisson equation. However, this approach often results in significant Poisson reconstruction errors due to the inability to ensure the integrability of the predicted neural Jacobian field, leading to unbounded distortion and lack of local injectivity. We propose a hybrid method that combines the speed and robustness of harmonic maps with the generality of deep neural networks to produce injective maps with low isometric distortion much faster than state-of-the-art methods. The core concept is simple but powerful. Instead of learning Jacobian fields, we learn metric tensor fields over the input mesh, resulting in a customized Laplacian matrix that defines a harmonic map in a modified metric [WGS23]. Our approach ensures injectivity, offers great computational efficiency, and produces significantly lower isometric distortion compared to straightforward harmonic maps.
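The harmonic-map machinery the method builds on is easiest to see with uniform (Tutte) weights on a tiny grid: every interior vertex relaxes to the average of its neighbors while the boundary stays pinned to a convex polygon, which guarantees injectivity by Tutte's theorem. The paper's learned metric fields effectively replace these uniform weights with a customized Laplacian; this sketch shows only the plain uniform case:

```python
def harmonic_map(interior_nbrs, boundary_uv, iters=500):
    """Gauss-Seidel relaxation of a harmonic map with uniform weights:
    interior uv = average of neighbor uvs, boundary uv fixed."""
    uv = dict(boundary_uv)
    for v in interior_nbrs:
        uv[v] = (0.0, 0.0)
    for _ in range(iters):
        for v, nbrs in interior_nbrs.items():
            uv[v] = (sum(uv[n][0] for n in nbrs) / len(nbrs),
                     sum(uv[n][1] for n in nbrs) / len(nbrs))
    return uv

# 4x4 grid (row-major ids 4*j+i), boundary pinned at (i/3, j/3).  Linear
# functions are harmonic under uniform weights, so the interior vertices
# relax back to their grid positions.
boundary = {4*j + i: (i/3, j/3) for j in range(4) for i in range(4)
            if not (i in (1, 2) and j in (1, 2))}
interior = {5: [1, 4, 6, 9], 6: [2, 5, 7, 10],
            9: [5, 8, 10, 13], 10: [6, 9, 11, 14]}
uv = harmonic_map(interior, boundary)
```

In matrix form this is a sparse linear solve with the (here uniform) Laplacian; learning a metric amounts to re-weighting that Laplacian before solving.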
  • Item
    SOBB: Skewed Oriented Bounding Boxes for Ray Tracing
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Kácerik, Martin; Bittner, Jirí; Bousseau, Adrien; Day, Angela
    We propose skewed oriented bounding boxes (SOBB) as a novel bounding primitive for accelerating the calculation of rayscene intersections. SOBBs have the same memory footprint as the well-known oriented bounding boxes (OBB) and can be used with a similar ray intersection algorithm. We propose an efficient algorithm for constructing a BVH with SOBBs, using a transformation from a standard BVH built for axis-aligned bounding boxes (AABB). We use discrete orientation polytopes as a temporary bounding representation to find tightly fitting SOBBs. Additionally, we propose a compression scheme for SOBBs that makes their memory requirements comparable to those of AABBs. For secondary rays, the SOBB BVH provides a ray tracing speedup of 1.0-11.0x over the AABB BVH and it is 1.1x faster than the OBB BVH on average. The transformation of AABB BVH to SOBB BVH is, on average, 2.6x faster than the ditetrahedron-based AABB BVH to OBB BVH transformation.
  • Item
    Towards Scaling-Invariant Projections for Data Visualization
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Dierkes, Joel; Stelter, Daniel; Rössl, Christian; Theisel, Holger; Bousseau, Adrien; Day, Angela
Finding projections of multidimensional data domains to the 2D screen space is a well-known problem. Multidimensional data often comes with the property that the dimensions are measured in different physical units, which renders the ratio between dimensions, i.e., their scale, arbitrary. The result of common projections, like PCA, t-SNE, or MDS, depends on this ratio, i.e., these projections are variant to scaling. This results in an undesired subjective view of the data and, thus, of their projection. Simple solutions like normalization of each dimension are widely used, but do not always give high-quality results. We propose to visually analyze the space of all scalings and to find optimal scalings w.r.t. the quality of the visualization. For this, we evaluate different quality criteria on scatter plots. Given a quality criterion, our approach finds scalings that yield good visualizations with little to no user input using numerical optimization. Simultaneously, our method results in a scaling invariant projection, proposing an objective view of the projected data. We show for several examples that such an optimal scaling can significantly improve the visualization quality.
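How strongly a projection depends on per-dimension scaling is easy to demonstrate with 2D PCA, whose leading direction has a closed form from the 2×2 covariance matrix: rescaling one axis of the same data rotates the principal direction. Helper names are illustrative:

```python
import math

def leading_pc_angle(data):
    """Angle of the first principal component of 2D points, via the
    closed-form eigenvector of the 2x2 covariance matrix."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    cxx = sum((x - mx)**2 for x, _ in data) / n
    cyy = sum((y - my)**2 for _, y in data) / n
    cxy = sum((x - mx) * (y - my) for x, y in data) / n
    return 0.5 * math.atan2(2 * cxy, cxx - cyy)

# The same measurements in different units: multiplying the y dimension
# by 200 rotates the leading PCA direction from ~0 to atan(2).
data = [(i, 0.01 * i) for i in range(10)]
rescaled = [(x, 200 * y) for x, y in data]
a1 = leading_pc_angle(data)
a2 = leading_pc_angle(rescaled)
```

For points lying exactly on y = m·x the formula gives atan(m), so the "principal structure" PCA reports is entirely a function of the unit choice, which is the variance-to-scaling problem the paper optimizes away.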
  • Item
    View-Dependent Visibility Optimization for Monte Carlo Volume Visualization
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Lerzer, Nathan; Dachsbacher, Carsten; Bousseau, Adrien; Day, Angela
Compared to classic ray marching-based approaches, Monte Carlo ray tracing for volume visualization can provide faster frame times through progressive rendering, improved image quality, and allows for advanced illumination models more easily. Techniques such as the view-dependent optimization of visibility and illumination of important regions, however, have been formulated for ray marching and rely on stepwise sampling along rays, and are thus incompatible with free-flight distance sampling of state-of-the-art Monte Carlo methods. In this paper we derive such a view-dependent optimization for Monte Carlo ray tracing where the visibility to the camera and the illumination and opacity of important regions are optimized for both single and multiple scattering rendering. For this we define a post-interpolative importance function, introduce an efficient data structure to sample, approximate and optimize the integrated extinction along rays, and devise an efficient Monte Carlo estimator for interactive visualization. Our method enables view-dependent visibility optimization with moderate memory overhead and unbiased, progressive Monte Carlo volume visualization. We demonstrate our method for various volume data sets as well as for data-dependent and spatially-dependent importance functions.
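Free-flight distance sampling, the ingredient that makes ray-marching-style formulations inapplicable here, can be sketched with classic delta (Woodcock) tracking: tentative collision distances are drawn from a constant majorant and probabilistically rejected as "null" collisions, with no fixed step size along the ray. A minimal transmittance estimator, not the paper's method:

```python
import math, random

def delta_track_transmittance(sigma_t, sigma_max, t_max, rng, n=20000):
    """Delta/Woodcock tracking: sample tentative collisions from the
    majorant sigma_max, reject null collisions with probability
    1 - sigma_t(t)/sigma_max; the fraction of trackings escaping past
    t_max is an unbiased transmittance estimate."""
    escaped = 0
    for _ in range(n):
        t = 0.0
        while True:
            t -= math.log(1.0 - rng.random()) / sigma_max  # free-flight step
            if t >= t_max:
                escaped += 1
                break
            if rng.random() < sigma_t(t) / sigma_max:
                break  # real collision: the tracking terminates
    return escaped / n

# Heterogeneous extinction sigma_t(t) = 0.5 + 0.5*t on [0, 1]:
# the analytic transmittance is exp(-0.75).
rng = random.Random(1)
T_est = delta_track_transmittance(lambda t: 0.5 + 0.5 * t,
                                  sigma_max=1.0, t_max=1.0, rng=rng)
```

Because distances are sampled rather than stepped, there is no per-step opacity accumulation for a ray-marching-style optimization to hook into, which is exactly the gap the paper's derivation fills.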
  • Item
    Shape-Conditioned Human Motion Diffusion Model with Mesh Representation
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Xue, Kebing; Seo, Hyewon; Bobenrieth, Cédric; Luo, Guoliang; Bousseau, Adrien; Day, Angela
Human motion generation is a key task in computer graphics. While various conditioning signals such as text, action class, or audio have been used to harness the generation process, most existing methods neglect the case where a specific body is desired to perform the motion. Additionally, they rely on skeleton-based pose representations, necessitating additional steps to produce renderable meshes of the intended body shape. Given that human motion involves a complex interplay of bones, joints, and muscles, focusing solely on the skeleton during generation neglects the rich information carried by muscles and soft tissues, as well as their influence on movement, ultimately limiting the variability and precision of the generated motions. In this paper, we introduce Shape-conditioned Motion Diffusion model (SMD), which enables the generation of human motion directly in the form of a mesh sequence, conditioned on both a text prompt and a body mesh. To fully exploit the mesh representation while minimizing resource costs, we employ spectral representation using the graph Laplacian to encode body meshes into the learning process. Unlike retargeting methods, our model does not require source motion data and generates a variety of desired semantic motions that are inherently tailored to the given identity shape. Extensive experimental evaluations show that the SMD model not only maintains the body shape consistently with the conditioning input across motion frames but also achieves competitive performance in text-to-motion and action-to-motion tasks compared to state-of-the-art methods.
  • Item
    From Words to Wood: Text-to-Procedurally Generated Wood Materials
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Hafidi, Mohcen; Wilkie, Alexander; Bousseau, Adrien; Day, Angela
    In the domain of wood modeling, we present a new complex appearance model, coupled with a user-friendly NLP-based frontend for intuitive interactivity. First, we present a procedurally generated wood model that is capable of accurately simulating intricate wood characteristics, including growth rings, vessels/pores, rays, knots, and figure. Furthermore, newly developed features were introduced, including brushiness distortion, influence points, and individual feature control. These novel enhancements facilitate a more precise matching between procedurally generated wood and ground truth images. Second, we present a text-based user interface that relies on a trained natural language processing model that is designed to map user plain English requests into the parameter space of our procedurally generated wood model. This significantly reduces the complexity of the authoring process, thereby enabling any user, regardless of their level of woodworking expertise or familiarity with procedurally generated materials, to utilize it to its fullest potential.
  • Item
    BlendSim: Simulation on Parametric Blendshapes using Spacetime Projective Dynamics
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Wu, Yuhan; Umetani, Nobuyuki; Bousseau, Adrien; Day, Angela
We propose BlendSim, a novel framework for editable simulation using spacetime optimization on a lightweight animation representation. Traditional spacetime control methods suffer from a high computational complexity, which limits their use in interactive animation. The proposed approach effectively reduces the dimensionality of the problem by representing the motion trajectories of each vertex using continuous parametric Bézier splines with variable keyframe times. Because this mesh animation representation is continuous and fully differentiable, it can be optimized such that it follows the laws of physics under various constraints. The proposed method also integrates constraints, such as collisions and cyclic motion, making it suitable for real-world applications where seamless looping and physical interactions are required. Leveraging projective dynamics, we further enhance the computational efficiency by decoupling the optimization into local parallelizable and global quadratic steps, enabling a fast and stable simulation. In addition, BlendSim is compatible with modern animation workflows and file formats, such as glTF, making it a practical way of authoring and transferring mesh animation.
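The continuous, differentiable trajectory representation rests on cubic Bézier spans, which evaluate by repeated linear interpolation (de Casteljau); each nesting level is itself a lerp, which is why the whole trajectory stays trivially differentiable in its control points. A minimal per-span sketch, independent of the paper's solver:

```python
def bezier(p0, p1, p2, p3, t):
    """De Casteljau evaluation of one cubic Bezier span of a vertex
    trajectory: three levels of linear interpolation at parameter t."""
    lerp = lambda a, b, u: tuple(ai + (bi - ai) * u for ai, bi in zip(a, b))
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    d, e = lerp(a, b, t), lerp(b, c, t)
    return lerp(d, e, t)

# A symmetric control polygon: the span interpolates its endpoints and
# peaks at (0.5, 0.75) at t = 0.5.
start = bezier((0, 0), (0, 1), (1, 1), (1, 0), 0.0)
mid   = bezier((0, 0), (0, 1), (1, 1), (1, 0), 0.5)
end   = bezier((0, 0), (0, 1), (1, 1), (1, 0), 1.0)
```

In a spacetime setting the control points (and the variable keyframe times) become the optimization variables, so physics residuals can be back-propagated straight through these lerps.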
  • Item
    Learning Fast 3D Gaussian Splatting Rendering using Continuous Level of Detail
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Milef, Nicholas; Seyb, Dario; Keeler, Todd; Nguyen-Phuoc, Thu; Bozic, Aljaz; Kondguli, Sushant; Marshall, Carl; Bousseau, Adrien; Day, Angela
    3D Gaussian splatting (3DGS) has shown potential for rendering photorealistic 3D scenes in real-time. Unfortunately, rendering these scenes on less powerful hardware is still a challenge, especially with high-resolution displays. We introduce a continuous level of detail (CLOD) algorithm and demonstrate how our method can improve performance while preserving as much quality as possible. Our approach learns to order splats based on importance and optimize them such that a representative and realistic scene can be rendered for an arbitrary splat count. Our method does not require any additional memory or rendering overhead and works with existing 3DGS renderers. We also demonstrate the flexibility of our CLOD method by extending it with distance-based LOD selection, foveated rendering, and budget-based rendering.