EG 2025 - Short Papers


Short Paper 1
Smaller than Pixels: Rendering Millions of Stars in Real-Time
Simon Schneegans, Adrian Kreskowski, and Andreas Gerndt
Cardioid Caustics Generation with Conditional Diffusion Models
Wojciech Uss, Wojciech Kaliński, Alexandr Kuznetsov, Harish Anand, and Sungye Kim
Approximate and Exact Buoyancy Calculation for Real-time Floating Simulation of Meshes
Gábor Fábián
Light the Sprite: Pixel Art Dynamic Light Map Generation
Ivan Nikolov
Importance Sampling of BCSDF Derivatives
Lei Wang and Kei Iwasaki
Short Paper 2
Personalized Visual Dubbing through Virtual Dubber and Full Head Reenactment
Bobae Jeon, Eric Paquette, Sudhir Mudur, and Tiberiu Popa
Single-Shot Facial Appearance Acquisition without Statistical Appearance Priors
Guan Yu Soh and Abhijeet Ghosh
Neural Facial Deformation Transfer
Prashanth Chandran, Loïc Ciccone, Gaspard Zoss, and Derek Bradley
Pixels2Points: Fusing 2D and 3D Features for Facial Skin Segmentation
Victoria Yue Chen, Daoye Wang, Stephan Garbin, Jan Bednarik, Sebastian Winberg, Timo Bolkart, and Thabo Beeler
Two-shot Shape and SVBRDF Reconstruction of Human Faces with Albedo-Conditioned Diffusion
Chongrui Fan, Yiming Lin, Arvin Lin, and Abhijeet Ghosh
Short Paper 3
TemPCC: Completing Temporal Occlusions in Large Dynamic Point Clouds captured by Multiple RGB-D Cameras
Andre Mühlenbrock, Rene Weller, and Gabriel Zachmann
3D Gabor Splatting: Reconstruction of High-frequency Surface Texture using Gabor Noise
Haato Watanabe, Kenji Tojo, and Nobuyuki Umetani
Real-time Neural Rendering of LiDAR Point Clouds
Joni Vanherck, Brent Zoomers, Tom Mertens, Lode Jorissen, and Nick Michiels
NoiseGS: Boosting 3D Gaussian Splatting with Positional Noise for Large-Scale Scene Rendering
Minseong Kweon, Kai Cheng, Xuejin Chen, and Jinsun Park
Automated Skeleton Transformations on 3D Tree Models Captured from an RGB Video
Joren Michels, Steven Moonen, Enes Güney, Abdellatif Bey Temsamani, and Nick Michiels
Short Paper 4
Controlled Image Variability via Diffusion Processes
Yueze Zhu and Niloy J. Mitra
Audio-aided Character Control for Inertial Measurement Tracking
Hojun Jang, Jinseok Bae, and Young Min Kim
LabanLab: An Interactive Choreographical System with Labanotation-Motion Preview
Zhe Yan, Borou Yu, and Zeyu Wang
3D Garments: Reconstructing Topologically Correct Geometry and High-Quality Texture from Two Garment Images
Lisa Heße and Sunil Yadav
Lightweight Morphology-Aware Encoding for Motion Learning
Ziyu Wu, Thomas Michel, and Damien Rohmer
Implicit Shape Avatar Generalization across Pose and Identity
Guillaume Loranchet, Pierre Hellier, Francois Schnitzler, Adnane Boukhayma, Joao Regateiro, and Franck Multon
Short Paper 5
Parallel Dense-Geometry-Format Topology Decompression
Quirin Meyer, Joshua Barczak, Sander Reitter, and Carsten Benthin
Multi-Objective Packing of 3D Objects into Arbitrary Containers
Hermann Meißenhelter, Rene Weller, and Gabriel Zachmann
Double QuickCurve: revisiting 3-axis non-planar 3D printing
Emilio Ottonello, Pierre-Alexandre Hugron, Alberto Parmiggiani, and Sylvain Lefebvre
PartFull: A Hybrid Method for Part-Aware 3D Object Reconstruction from Sparse Views
Grekou Yao, Sébastien Mavromatis, and Jean-Luc Mari
Non-linear, Team-based VR Training for Cardiac Arrest Care with enhanced CRM Toolkit
Mike Kentros, Manos Kamarianakis, Michael Cole, Vitaliy Popov, Antonis Protopsaltis, and George Papagiannakis

BibTeX (EG 2025 - Short Papers)
@inproceedings{10.2312:egs.20251029,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Smaller than Pixels: Rendering Millions of Stars in Real-Time}},
  author = {Schneegans, Simon and Kreskowski, Adrian and Gerndt, Andreas},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251029}
}
@inproceedings{10.2312:egs.20251030,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Cardioid Caustics Generation with Conditional Diffusion Models}},
  author = {Uss, Wojciech and Kaliński, Wojciech and Kuznetsov, Alexandr and Anand, Harish and Kim, Sungye},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251030}
}
@inproceedings{10.2312:egs.20251031,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Approximate and Exact Buoyancy Calculation for Real-time Floating Simulation of Meshes}},
  author = {Fábián, Gábor},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251031}
}
@inproceedings{10.2312:egs.20251032,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Light the Sprite: Pixel Art Dynamic Light Map Generation}},
  author = {Nikolov, Ivan},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251032}
}
@inproceedings{10.2312:egs.20251033,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Importance Sampling of BCSDF Derivatives}},
  author = {Wang, Lei and Iwasaki, Kei},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251033}
}
@inproceedings{10.2312:egs.20251034,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Personalized Visual Dubbing through Virtual Dubber and Full Head Reenactment}},
  author = {Jeon, Bobae and Paquette, Eric and Mudur, Sudhir and Popa, Tiberiu},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251034}
}
@inproceedings{10.2312:egs.20251035,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Single-Shot Facial Appearance Acquisition without Statistical Appearance Priors}},
  author = {Soh, Guan Yu and Ghosh, Abhijeet},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251035}
}
@inproceedings{10.2312:egs.20251036,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Neural Facial Deformation Transfer}},
  author = {Chandran, Prashanth and Ciccone, Loïc and Zoss, Gaspard and Bradley, Derek},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251036}
}
@inproceedings{10.2312:egs.20251037,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Pixels2Points: Fusing 2D and 3D Features for Facial Skin Segmentation}},
  author = {Chen, Victoria Yue and Wang, Daoye and Garbin, Stephan and Bednarik, Jan and Winberg, Sebastian and Bolkart, Timo and Beeler, Thabo},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251037}
}
@inproceedings{10.2312:egs.20251038,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Two-shot Shape and SVBRDF Reconstruction of Human Faces with Albedo-Conditioned Diffusion}},
  author = {Fan, Chongrui and Lin, Yiming and Lin, Arvin and Ghosh, Abhijeet},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251038}
}
@inproceedings{10.2312:egs.20251039,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{TemPCC: Completing Temporal Occlusions in Large Dynamic Point Clouds captured by Multiple RGB-D Cameras}},
  author = {Mühlenbrock, Andre and Weller, Rene and Zachmann, Gabriel},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251039}
}
@inproceedings{10.2312:egs.20251040,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{3D Gabor Splatting: Reconstruction of High-frequency Surface Texture using Gabor Noise}},
  author = {Watanabe, Haato and Tojo, Kenji and Umetani, Nobuyuki},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251040}
}
@inproceedings{10.2312:egs.20251041,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Real-time Neural Rendering of LiDAR Point Clouds}},
  author = {Vanherck, Joni and Zoomers, Brent and Mertens, Tom and Jorissen, Lode and Michiels, Nick},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251041}
}
@inproceedings{10.2312:egs.20251042,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{NoiseGS: Boosting 3D Gaussian Splatting with Positional Noise for Large-Scale Scene Rendering}},
  author = {Kweon, Minseong and Cheng, Kai and Chen, Xuejin and Park, Jinsun},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251042}
}
@inproceedings{10.2312:egs.20251043,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Automated Skeleton Transformations on 3D Tree Models Captured from an RGB Video}},
  author = {Michels, Joren and Moonen, Steven and Güney, Enes and Temsamani, Abdellatif Bey and Michiels, Nick},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251043}
}
@inproceedings{10.2312:egs.20251044,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Controlled Image Variability via Diffusion Processes}},
  author = {Zhu, Yueze and Mitra, Niloy J.},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251044}
}
@inproceedings{10.2312:egs.20251045,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Audio-aided Character Control for Inertial Measurement Tracking}},
  author = {Jang, Hojun and Bae, Jinseok and Kim, Young Min},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251045}
}
@inproceedings{10.2312:egs.20251046,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{LabanLab: An Interactive Choreographical System with Labanotation-Motion Preview}},
  author = {Yan, Zhe and Yu, Borou and Wang, Zeyu},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251046}
}
@inproceedings{10.2312:egs.20251047,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{3D Garments: Reconstructing Topologically Correct Geometry and High-Quality Texture from Two Garment Images}},
  author = {Heße, Lisa and Yadav, Sunil},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251047}
}
@inproceedings{10.2312:egs.20251048,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Lightweight Morphology-Aware Encoding for Motion Learning}},
  author = {Wu, Ziyu and Michel, Thomas and Rohmer, Damien},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251048}
}
@inproceedings{10.2312:egs.20251049,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Implicit Shape Avatar Generalization across Pose and Identity}},
  author = {Loranchet, Guillaume and Hellier, Pierre and Schnitzler, Francois and Boukhayma, Adnane and Regateiro, Joao and Multon, Franck},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251049}
}
@inproceedings{10.2312:egs.20251050,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Parallel Dense-Geometry-Format Topology Decompression}},
  author = {Meyer, Quirin and Barczak, Joshua and Reitter, Sander and Benthin, Carsten},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251050}
}
@inproceedings{10.2312:egs.20251051,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Multi-Objective Packing of 3D Objects into Arbitrary Containers}},
  author = {Meißenhelter, Hermann and Weller, Rene and Zachmann, Gabriel},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251051}
}
@inproceedings{10.2312:egs.20251052,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Double QuickCurve: revisiting 3-axis non-planar 3D printing}},
  author = {Ottonello, Emilio and Hugron, Pierre-Alexandre and Parmiggiani, Alberto and Lefebvre, Sylvain},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251052}
}
@inproceedings{10.2312:egs.20251053,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{PartFull: A Hybrid Method for Part-Aware 3D Object Reconstruction from Sparse Views}},
  author = {Yao, Grekou and Mavromatis, Sébastien and Mari, Jean-Luc},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251053}
}
@inproceedings{10.2312:egs.20251054,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{Non-linear, Team-based VR Training for Cardiac Arrest Care with enhanced CRM Toolkit}},
  author = {Kentros, Mike and Kamarianakis, Manos and Cole, Michael and Popov, Vitaliy and Protopsaltis, Antonis and Papagiannakis, George},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20251054}
}
@inproceedings{10.2312:egs.20252002,
  booktitle = {Eurographics 2025 - Short Papers},
  editor = {Ceylan, Duygu and Li, Tzu-Mao},
  title = {{EUROGRAPHICS 2025: Short Papers Frontmatter}},
  author = {Ceylan, Duygu and Li, Tzu-Mao},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-268-4},
  DOI = {10.2312/egs.20252002}
}


Recent Submissions

  • Item
    Smaller than Pixels: Rendering Millions of Stars in Real-Time
    (The Eurographics Association, 2025) Schneegans, Simon; Kreskowski, Adrian; Gerndt, Andreas; Ceylan, Duygu; Li, Tzu-Mao
    Many applications need to display realistic stars. However, rendering stars with their correct luminance is surprisingly difficult: usually, stars are so far away from the observer that they appear smaller than a single pixel. As one cannot visualize objects smaller than a pixel, one has to either distribute a star's luminance over an entire pixel or draw some kind of proxy geometry for the star. We also have to consider that pixels at the edge of the screen cover a smaller portion of the observer's field of view than pixels in the centre. Hence, single-pixel stars at the edge of the screen have to be drawn proportionally brighter than those in the centre. This is especially important for virtual-reality or dome renderings, where the field of view is large. In this paper, we compare different rendering techniques for stars and show how to compute their luminance based on the solid angle covered by their geometric proxies. This includes point-based stars and various types of camera-aligned billboards. In addition, we present a software rasterizer which outperforms these classic rendering techniques in almost all cases. Furthermore, we show how a perception-based glare filter can be used to efficiently distribute a star's luminance to neighbouring pixels. Our implementation is part of the open-source space-visualization software CosmoScout VR.
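The edge-brightening effect described in the abstract can be made concrete with a small sketch. Assuming a simple pinhole camera (an illustrative model, not the paper's actual derivation), the solid angle subtended by a pixel falls off toward the screen edge, so a sub-pixel star drawn there must be boosted by the inverse ratio; the function name and parametrization below are assumptions for illustration.

```python
import math

def pixel_solid_angle(x_ndc, y_ndc, fov_y, aspect, res_x, res_y):
    """Approximate solid angle (steradians) of the pixel centred at
    normalized device coordinates (x_ndc, y_ndc) in [-1, 1].
    Pinhole model: the image plane sits at unit distance with
    half-extents tan(fov/2); an off-axis pixel subtends less solid
    angle by a cos^3(theta) falloff (dA * cos(theta) / r^2)."""
    ty = math.tan(fov_y / 2.0)
    tx = ty * aspect
    # footprint of one pixel on the image plane at unit distance
    area = (2.0 * tx / res_x) * (2.0 * ty / res_y)
    # image-plane point for this pixel; squared distance to the eye
    px, py = x_ndc * tx, y_ndc * ty
    d2 = px * px + py * py + 1.0
    return area / (d2 * math.sqrt(d2))

# for a wide 100-degree field of view, a corner star must be drawn
# several times brighter than a centre star of the same luminance
center = pixel_solid_angle(0.0, 0.0, math.radians(100), 1.0, 1024, 1024)
corner = pixel_solid_angle(1.0, 1.0, math.radians(100), 1.0, 1024, 1024)
boost = center / corner
```

For narrow fields of view the boost is close to 1, which is why the effect mostly matters for VR and dome projections.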
  • Item
    Cardioid Caustics Generation with Conditional Diffusion Models
    (The Eurographics Association, 2025) Uss, Wojciech; Kaliński, Wojciech; Kuznetsov, Alexandr; Anand, Harish; Kim, Sungye; Ceylan, Duygu; Li, Tzu-Mao
    Despite the latest advances in generative neural techniques for producing photorealistic images, they still fail to generate multi-bounce, high-frequency lighting effects such as caustics. In this work, we tackle the problem of generating cardioid-shaped reflective caustics using diffusion-based generative models. We approach this problem as conditional image generation, conditioning a diffusion-based model on multiple images of geometric, material and illumination information as well as light properties. We introduce a framework to fine-tune a pre-trained diffusion model and present results with visually plausible caustics.
  • Item
    Approximate and Exact Buoyancy Calculation for Real-time Floating Simulation of Meshes
    (The Eurographics Association, 2025) Fábián, Gábor; Ceylan, Duygu; Li, Tzu-Mao
    In this paper, we present methods for simulating the flotation of bodies represented by triangular meshes. The primary challenge in creating such a simulation is determining the buoyant force and its reference point. We propose five algorithms, three approximate and two exact, that enable the real-time calculation of buoyant forces. Each algorithm is based on rigorous physical and mathematical principles, performing calculations directly on the triangular mesh rather than on an approximation of it. Finally, we test the accuracy and efficiency of these algorithms through simple examples.
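As a concrete illustration of exact buoyancy on a triangle mesh (a sketch of the general idea, not one of the paper's five algorithms): for a closed, fully submerged mesh, the divergence theorem gives the displaced volume and its centroid from signed tetrahedra against the origin, and Archimedes' principle turns that into a force. Clipping partially submerged meshes against the water surface, which the paper addresses, is omitted here.

```python
def buoyancy(vertices, triangles, rho=1000.0, g=9.81):
    """Buoyant force magnitude and centre of buoyancy for a closed,
    outward-oriented, fully submerged triangle mesh.
    Sums signed tetrahedra (origin, a, b, c) over all faces; the
    signed volumes add up to the enclosed volume, and their centroids,
    volume-weighted, give the centre of buoyancy."""
    vol = 0.0
    cx = cy = cz = 0.0
    for i, j, k in triangles:
        a, b, c = vertices[i], vertices[j], vertices[k]
        # signed volume of tetrahedron (origin, a, b, c): det/6
        v = (a[0] * (b[1] * c[2] - b[2] * c[1])
             - a[1] * (b[0] * c[2] - b[2] * c[0])
             + a[2] * (b[0] * c[1] - b[1] * c[0])) / 6.0
        vol += v
        # tetrahedron centroid = mean of its four vertices (origin drops out)
        cx += v * (a[0] + b[0] + c[0]) / 4.0
        cy += v * (a[1] + b[1] + c[1]) / 4.0
        cz += v * (a[2] + b[2] + c[2]) / 4.0
    centre = (cx / vol, cy / vol, cz / vol)
    return rho * g * vol, centre  # force acts upward through the centre
```

For example, the unit tetrahedron with vertices at the origin and the three axis unit points has volume 1/6, so in water it displaces a buoyant force of about 1635 N, acting at (0.25, 0.25, 0.25).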
  • Item
    Light the Sprite: Pixel Art Dynamic Light Map Generation
    (The Eurographics Association, 2025) Nikolov, Ivan; Ceylan, Duygu; Li, Tzu-Mao
    Correct lighting and shading are vital for pixel art design. Automating texture generation, such as normal, depth, and occlusion maps, has been a long-standing focus. We extend this by proposing a deep learning model that generates point and directional light maps from RGB pixel art sprites and specified light vectors. Our approach modifies a U-Net architecture with CIN layers to incorporate positional and directional information, using ZoeDepth for training depth data. Testing on a popular pixel art dataset shows that the generated light maps closely match those produced from depth or normal maps, as well as those created manually. The model effectively relights complex sprites across styles and functions in real time, enhancing artist workflows. The code and dataset are available at https://github.com/IvanNik17/light-sprite.
  • Item
    Importance Sampling of BCSDF Derivatives
    (The Eurographics Association, 2025) Wang, Lei; Iwasaki, Kei; Ceylan, Duygu; Li, Tzu-Mao
    Differentiable rendering requires the development of importance sampling for derivative functions with respect to the parameters. While importance sampling for Bidirectional Reflectance Distribution Function (BRDF) derivatives has been proposed in recent years, no methods have been introduced for the derivatives of the Bidirectional Curve Scattering Distribution Function (BCSDF). To bridge this gap, we propose an importance sampling method for the derivatives of the BCSDF using positivization [BXB∗24]. Our BCSDF derivative importance sampling method achieves up to a 94% reduction in RMSE for equal-time rendering.
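Positivization can be illustrated in 1D (an analogue only; this assumes nothing about the paper's actual BCSDF formulation): the derivative of a Gaussian pdf with respect to its mean is negative on one side of the mean and positive on the other. Splitting it at the sign change yields two normalized densities, here Rayleigh-distributed offsets, each of which can be sampled exactly; the signed parts are then estimated separately and recombined.

```python
import math
import random

def grad_mu_estimate(g, mu, sigma, n=20000, seed=1):
    """MC estimate of d/dmu E_{x ~ N(mu, sigma)}[g(x)] via positivization.
    d/dmu of the Gaussian pdf is ((x-mu)/sigma^2) * pdf(x): negative for
    x < mu, positive for x > mu. Each part, normalized by its mass
    c = 1/(sigma*sqrt(2*pi)), is a Rayleigh(sigma) offset distribution,
    sampled exactly by inverse-CDF below."""
    rng = random.Random(seed)
    c = 1.0 / (sigma * math.sqrt(2.0 * math.pi))  # mass of each signed part
    pos = neg = 0.0
    for _ in range(n):
        # Rayleigh(sigma) sample via inverse CDF
        t = sigma * math.sqrt(-2.0 * math.log(1.0 - rng.random()))
        pos += g(mu + t)   # sample from the positive part (x > mu)
        neg += g(mu - t)   # sample from the negative part (x < mu)
    return c * (pos - neg) / n

# sanity checks against closed forms: d/dmu E[x] = 1, d/dmu E[x^2] = 2*mu
est_mean = grad_mu_estimate(lambda x: x, mu=0.3, sigma=0.8)
est_sq = grad_mu_estimate(lambda x: x * x, mu=0.3, sigma=0.8)
```

Because each signed part is sampled from a density proportional to its own magnitude, the estimator avoids the sign-cancellation variance that plagues naive sampling of signed integrands.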
  • Item
    Personalized Visual Dubbing through Virtual Dubber and Full Head Reenactment
    (The Eurographics Association, 2025) Jeon, Bobae; Paquette, Eric; Mudur, Sudhir; Popa, Tiberiu; Ceylan, Duygu; Li, Tzu-Mao
    Visual dubbing aims to modify facial expressions to ''lip-sync'' a new audio track. While person-generic talking head generation methods achieve expressive lip synchronization across arbitrary identities, they usually lack person-specific details and fail to generate high-quality results. Conversely, person-specific methods require extensive training. Our method combines the strengths of both by incorporating a virtual dubber, a person-generic talking head, as an intermediate representation. We then employ an autoencoder-based person-specific identity-swapping network to transfer the actor's identity, enabling full-head reenactment that includes hair, face, ears, and neck. This eliminates artifacts while ensuring temporal consistency. Our quantitative and qualitative evaluations demonstrate that our method achieves a superior balance between lip-sync accuracy and realistic facial reenactment.
  • Item
    Single-Shot Facial Appearance Acquisition without Statistical Appearance Priors
    (The Eurographics Association, 2025) Soh, Guan Yu; Ghosh, Abhijeet; Ceylan, Duygu; Li, Tzu-Mao
    Single-shot in-the-wild facial reflectance acquisition has been a long-standing challenge in the fields of computer graphics and computer vision. Current state-of-the-art methods are typically learning-based, pre-trained on a dataset of facial reflectance data. However, due to the high cost and time-consuming nature of gathering these datasets, they are usually limited in the number of subjects covered and hence are prone to dataset bias. We therefore propose a novel multi-stage guided optimization with differentiable rendering to tackle this problem without the use of statistical facial appearance priors. This makes our method immune to such biases, and we demonstrate its advantage with qualitative and quantitative evaluations against current state-of-the-art methods.
  • Item
    Neural Facial Deformation Transfer
    (The Eurographics Association, 2025) Chandran, Prashanth; Ciccone, Loïc; Zoss, Gaspard; Bradley, Derek; Ceylan, Duygu; Li, Tzu-Mao
    We address the practical problem of generating facial blendshapes and reference animations for a new 3D character in production environments where blendshape expressions and reference animations are readily available on a pre-defined template character. We propose Neural Facial Deformation Transfer (NFDT), a data-driven approach to transfer facial expressions from such a template character to new target characters given only the target's neutral shape. To accomplish this, we first present a simple data-generation strategy to automatically create a large training dataset consisting of pairs of template and target character shapes in the same expression. We then leverage this dataset through a decoder-only transformer that transfers facial expressions from the template character to a target character with high fidelity. Through quantitative evaluations and a user study, we demonstrate that NFDT surpasses the previous state of the art in facial expression transfer. NFDT provides good results across varying mesh topologies, generalizes to humanoid creatures, and can save time and cost in facial animation workflows.
  • Item
    Pixels2Points: Fusing 2D and 3D Features for Facial Skin Segmentation
    (The Eurographics Association, 2025) Chen, Victoria Yue; Wang, Daoye; Garbin, Stephan; Bednarik, Jan; Winberg, Sebastian; Bolkart, Timo; Beeler, Thabo; Ceylan, Duygu; Li, Tzu-Mao
    Face registration deforms a template mesh to closely fit a 3D face scan, and its quality commonly degrades in non-skin regions (e.g., hair, beard, accessories) because the optimized template-to-scan distance pulls the template mesh towards the noisy scan surface. Improving registration quality requires a clean separation of skin and non-skin regions on the scan mesh. Existing image-based (2D) and scan-based (3D) segmentation methods, however, perform poorly: image-based segmentation outputs multi-view-inconsistent masks and cannot account for scan inaccuracies or scan-image misalignment, while scan-based methods suffer from lower spatial resolution than images. In this work, we introduce a novel method that accurately separates skin from non-skin geometry on 3D human head scans. Our method extracts features from multi-view images using a frozen image foundation model and aggregates them in 3D. These lifted 2D features are then fused with 3D geometric features extracted from the scan mesh to predict a segmentation mask directly on the scan mesh. We show that our segmentations improve registration accuracy over pure 2D and 3D segmentation methods by 8.89% and 14.3%, respectively. Although trained only on synthetic data, our model generalizes well to real data.
  • Item
    Two-shot Shape and SVBRDF Reconstruction of Human Faces with Albedo-Conditioned Diffusion
    (The Eurographics Association, 2025) Fan, Chongrui; Lin, Yiming; Lin, Arvin; Ghosh, Abhijeet; Ceylan, Duygu; Li, Tzu-Mao
    Reconstructing relightable 3D human heads has been a long-standing research problem. Most methods either require a complicated hardware setup for multi-view capture or involve fitting a pre-learned morphable model, resulting in a loss of details. In our work, we present a two-step deep learning method that directly predicts the shape and SVBRDF of a subject's face given two images, one taken from each side of the face. We enhance SVBRDF prediction by first estimating the diffuse and specular albedo in image space, then generating texture maps in UV-space with a generative model. We also learn a 2D position map in UV-space for the 3D geometry, eliminating the need for a morphable model. Contrary to single-image facial reconstruction methods, we obtain clear measurements on both sides of the face with two images. Our method outperforms state-of-the-art methods when rendering faces at extreme angles and provides texture maps that are directly usable in most rendering systems.
  • Item
    TemPCC: Completing Temporal Occlusions in Large Dynamic Point Clouds captured by Multiple RGB-D Cameras
    (The Eurographics Association, 2025) Mühlenbrock, Andre; Weller, Rene; Zachmann, Gabriel; Ceylan, Duygu; Li, Tzu-Mao
    We present TemPCC, an approach to complete temporal occlusions in large dynamic point clouds. Our method manages a point set over time, integrates new observations into this set, and predicts the motion of occluded points based on the flow of surrounding visible ones. Unlike existing methods, our approach efficiently handles arbitrarily large point sets with linear complexity, does not reconstruct a canonical representation, and considers only local features. Our tests, performed on an Nvidia GeForce RTX 4090, demonstrate that our approach can complete a frame with 30,000 points in under 30 ms while, in general, being able to handle point sets exceeding 1,000,000 points. This scalability enables the mitigation of temporal occlusions across entire scenes captured by multi-RGB-D camera setups. Our initial results demonstrate that self-occlusions are effectively completed and that the approach generalizes to unknown scenes despite limited training data.
  • Item
    3D Gabor Splatting: Reconstruction of High-frequency Surface Texture using Gabor Noise
    (The Eurographics Association, 2025) Watanabe, Haato; Tojo, Kenji; Umetani, Nobuyuki; Ceylan, Duygu; Li, Tzu-Mao
    3D Gaussian splatting has experienced explosive popularity in the past few years in the field of novel view synthesis. The lightweight and differentiable representation of the radiance field using Gaussians enables rapid, high-quality reconstruction and fast rendering. However, reconstructing objects with high-frequency surface textures (e.g., fine stripes) requires many skinny Gaussian kernels, because each Gaussian represents only one color when viewed from a given direction. Reconstructing a stripe pattern, for example, thus requires at least as many Gaussians as there are stripes. We present 3D Gabor splatting, which augments the Gaussian kernel with Gabor noise to represent spatially high-frequency signals. The Gabor kernel is a combination of a Gaussian term and spatially fluctuating wave functions, making it suitable for representing high-frequency spatial texture. We demonstrate that our 3D Gabor splatting can reconstruct various high-frequency textures on objects.
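The Gabor kernel described above, a Gaussian envelope multiplied by a wave, can be sketched as follows. Parameter names and the exact parametrization are illustrative assumptions, not the paper's formulation; the point is that one kernel oscillates in sign along its frequency vector, so it can encode a whole stripe patch where a plain Gaussian encodes a single blob.

```python
import math

def gabor_kernel(p, center, a, freq, phase):
    """Evaluate a 3D Gabor kernel at point p: a Gaussian envelope
    exp(-a*|p-center|^2) modulated by a cosine wave with 3D frequency
    vector `freq` (cycles per unit) and phase offset `phase`.
    Illustrative sketch; not the paper's exact parametrization."""
    d = [p[i] - center[i] for i in range(3)]
    r2 = sum(di * di for di in d)
    wave = math.cos(2.0 * math.pi * sum(freq[i] * d[i] for i in range(3)) + phase)
    return math.exp(-a * r2) * wave

# with freq = (4, 0, 0) the kernel oscillates four times per unit along x:
# positive at the centre, negative half a period (0.125 units) later
v_center = gabor_kernel((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0, (4.0, 0.0, 0.0), 0.0)
v_half = gabor_kernel((0.125, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0, (4.0, 0.0, 0.0), 0.0)
```

A sum of such kernels with fitted envelopes, frequencies, and phases can thus represent fine stripes with far fewer primitives than sign-constant Gaussians.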
  • Item
    Real-time Neural Rendering of LiDAR Point Clouds
    (The Eurographics Association, 2025) Vanherck, Joni; Zoomers, Brent; Mertens, Tom; Jorissen, Lode; Michiels, Nick; Ceylan, Duygu; Li, Tzu-Mao
    Static LiDAR scanners produce accurate, dense, colored point clouds, but these often contain obtrusive artifacts that make them ill-suited for direct display. We propose an efficient method to render more perceptually realistic images of such scans without any expensive preprocessing or training of a scene-specific model. A naive projection of the point cloud to the output view using 1×1 pixels is fast and retains the available detail, but also results in unintelligible renderings, as background points leak between the foreground pixels. The key insight is that these projections can be transformed into a more realistic result using a deep convolutional model in the form of a U-Net, together with a depth-based heuristic that prefilters the data. The U-Net also handles LiDAR-specific problems such as missing parts due to occlusion, color inconsistencies, and varying point densities. We also describe a method to generate synthetic training data to deal with imperfectly aligned ground-truth images. Our method achieves real-time rendering rates using an off-the-shelf GPU and outperforms the state of the art in both speed and quality.
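The depth-based prefilter could look something like the following sketch (a guess at the general idea, not the paper's exact heuristic): after z-buffered projection, a point lying well behind the nearest depth in its pixel neighbourhood is treated as a background point leaking between foreground splats and is discarded.

```python
def prefilter(depth, threshold=0.5):
    """Discard background leaks from a projected point cloud's depth map.
    depth: 2D list of per-pixel depths, None for empty pixels.
    A pixel is cleared when its depth exceeds the nearest depth in its
    3x3 neighbourhood by more than `threshold` (scene units).
    Illustrative sketch; threshold and neighbourhood size are assumptions."""
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            d = depth[y][x]
            if d is None:
                continue
            # nearest (smallest) depth among occupied 3x3 neighbours
            near = min(depth[ny][nx]
                       for ny in range(max(0, y - 1), min(h, y + 2))
                       for nx in range(max(0, x - 1), min(w, x + 2))
                       if depth[ny][nx] is not None)
            if d - near > threshold:
                out[y][x] = None  # background leak: drop this point
    return out
```

The cleaned projection, still full of small holes, would then be the input that a U-Net inpaints into a coherent image.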
  • Item
    NoiseGS: Boosting 3D Gaussian Splatting with Positional Noise for Large-Scale Scene Rendering
    (The Eurographics Association, 2025) Kweon, Minseong; Cheng, Kai; Chen, Xuejin; Park, Jinsun; Ceylan, Duygu; Li, Tzu-Mao
    3D Gaussian Splatting (3DGS) efficiently renders 3D spaces by adaptively densifying anisotropic Gaussians from initial points. However, in complex scenes such as city-scale environments, large Gaussians often overlap with high-frequency regions rich in edges and fine details. In these areas, conflicting per-pixel gradient directions cause gradient cancellation, reducing the overall gradient magnitude and potentially leaving Gaussians trapped in suboptimal positions even after densification. To address this, we propose NoiseGS, a novel approach that integrates randomized noise injection into 3DGS, guiding suboptimal Gaussians selected for densification toward more optimal positions. In addition, to mitigate the instability caused by oversized Gaussians, we introduce an ℓp-penalization on the scale of Gaussians. Our method integrates seamlessly with existing heuristic-based optimization and demonstrates strong generalization in reconstructing complex scenes such as MatrixCity and Building.
  • Item
    Automated Skeleton Transformations on 3D Tree Models Captured from an RGB Video
    (The Eurographics Association, 2025) Michels, Joren; Moonen, Steven; Güney, Enes; Temsamani, Abdellatif Bey; Michiels, Nick; Ceylan, Duygu; Li, Tzu-Mao
    Much work has been done on generating realistic-looking 3D models of trees. In most cases, L-systems are used to create variations of specific trees from a set of rules. While achieving good results, these techniques require knowledge of the tree's structure to construct generative rules. We propose a system that creates variations of trees captured in a single RGB video. Using our method, plausible variations can be created without prior knowledge of the specific type of tree. This results in a fast and cost-efficient way to generate trees that resemble their real-life counterparts.
  • Item
    Controlled Image Variability via Diffusion Processes
    (The Eurographics Association, 2025) Zhu, Yueze; Mitra, Niloy J.; Ceylan, Duygu; Li, Tzu-Mao
    Diffusion models have shown remarkable abilities in generating realistic images. Unfortunately, diffusion processes do not directly produce diverse samples. Recent work has addressed this problem by applying a joint-particle time-evolving potential force that encourages varied and distinct generations. However, such a method focuses on improving diversity across a batch of generations rather than producing variations of a specific sample. In this paper, we propose a method for creating subtle variations of a single (generated) image: specifically, we propose Single Sample Refinement, a simple and training-free method to improve the diversity of one specific sample at different levels of variability. This mode is useful for creative content generation, allowing users to explore controlled variations without sacrificing the identity of the main objects.
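    The abstract does not spell out the algorithm, but a common training-free way to vary one fixed sample at a chosen strength, in the spirit described, is to re-noise the image to an intermediate diffusion time and denoise it again, with the noise level controlling variability. The sketch below is illustrative only; `denoise_fn` stands in for a pretrained diffusion denoiser, and the noising schedule is an assumption:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def perturb_and_denoise(sample, denoise_fn, t=0.3):
        """Re-noise `sample` to an intermediate diffusion time t in [0, 1],
        then run the denoiser from that point. Larger t gives stronger
        variation; t = 0 returns the input unchanged."""
        noised = np.sqrt(1.0 - t) * sample + np.sqrt(t) * rng.normal(size=sample.shape)
        return denoise_fn(noised, t)
    ```

    Sweeping `t` over a few values yields a family of variations of the same image at increasing levels of variability.
    
    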
  • Item
    Audio-aided Character Control for Inertial Measurement Tracking
    (The Eurographics Association, 2025) Jang, Hojun; Bae, Jinseok; Kim, Young Min; Ceylan, Duygu; Li, Tzu-Mao
    Physics-based character control generates realistic motion dynamics by leveraging kinematic priors from large-scale data within a simulation engine. The simulated motion respects physical plausibility, while dynamic cues like contacts and forces guide compelling human-scene interaction. However, leveraging audio cues, which can capture physical contacts in a cost-effective way, has been less explored in animating human motions. In this work, we demonstrate that audio inputs can enhance accuracy in predicting footsteps and capturing human locomotion dynamics. Experiments validate that audio-aided control from sparse observations (e.g., an IMU sensor on a VR headset) enhances the prediction accuracy of contact dynamics and motion tracking, offering a practical auxiliary signal for robotics, gaming, and virtual environments.
  • Item
    LabanLab: An Interactive Choreographical System with Labanotation-Motion Preview
    (The Eurographics Association, 2025) Yan, Zhe; Yu, Borou; Wang, Zeyu; Ceylan, Duygu; Li, Tzu-Mao
    This paper introduces LabanLab, a novel choreography system that facilitates the creation of dance notation with motion preview. LabanLab features an interactive interface for creating Labanotation staff coupled with visualization of corresponding movements. Leveraging large language models (LLMs) and text-to-motion frameworks, LabanLab translates symbolic notation into natural language descriptions to generate lifelike character animations. As the first web-based Labanotation editor with motion synthesis capabilities, LabanLab makes Labanotation an input modality for multitrack human motion generation, empowering choreographers with practical tools and inviting novices to explore dance notation interactively.
  • Item
    3D Garments: Reconstructing Topologically Correct Geometry and High-Quality Texture from Two Garment Images
    (The Eurographics Association, 2025) Heße, Lisa; Yadav, Sunil; Ceylan, Duygu; Li, Tzu-Mao
    We present a fully integrated pipeline for generating topologically correct 3D meshes and high-fidelity textures of fashion garments. Our geometry reconstruction module takes two input images and employs a semi-signed distance field representation with shifted generalized winding numbers in a deep-learning framework to produce accurate, non-watertight meshes. To create realistic, high-resolution textures (up to 4K) that closely match the input, we combine diffusion-based inpainting with a differentiable renderer, further enhancing the quality through normal-guided projection to minimize projection distortions in the texture image. Our results demonstrate both precise geometry and richly detailed textures. In addition, we are making a portion of our high-quality training dataset publicly available, consisting of 250 lower-garment triangulated meshes with 4K textures.
  • Item
    Lightweight Morphology-Aware Encoding for Motion Learning
    (The Eurographics Association, 2025) Wu, Ziyu; Michel, Thomas; Rohmer, Damien; Ceylan, Duygu; Li, Tzu-Mao
    We present a lightweight method for encoding, learning, and predicting 3D rigged character motion sequences that considers both the character's pose and morphology. Specifically, we introduce an enhanced skeletal embedding that extends the standard skeletal representation by incorporating the radius of proxy cylinders, which conveys geometric information about the character's morphology at each joint. This additional geometric data is represented using compact tokens designed to work seamlessly with transformer architectures. This simple yet effective representation, demonstrated through three distinct tokenization strategies, maintains the efficiency of skeletal-based representations while enhancing the accuracy of motion sequence predictions across diverse morphologies. Notably, our method achieves these results despite being trained on a limited dataset, showcasing its potential for applications with scarce animation data.
  • Item
    Implicit Shape Avatar Generalization across Pose and Identity
    (The Eurographics Association, 2025) Loranchet, Guillaume; Hellier, Pierre; Schnitzler, Francois; Boukhayma, Adnane; Regateiro, Joao; Multon, Franck; Ceylan, Duygu; Li, Tzu-Mao
    The creation of realistic animated avatars has become a hot topic in both academia and the creative industry. Recent advancements in deep learning and implicit representations have opened new research avenues, particularly in enhancing avatar details with lightweight models. This paper introduces an improvement over the state-of-the-art implicit Fast-SNARF method to permit generalization to novel motions and shape identities. Fast-SNARF trains two networks: an occupancy network to predict the shape of a character in canonical space, and a Linear Blend Skinning network to deform it into arbitrary poses. However, it requires a separate model for each subject. We extend this work by conditioning both networks on an identity parameter, enabling a single model to generalize across multiple identities without increasing the model's size compared to Fast-SNARF.
  • Item
    Parallel Dense-Geometry-Format Topology Decompression
    (The Eurographics Association, 2025) Meyer, Quirin; Barczak, Joshua; Reitter, Sander; Benthin, Carsten; Ceylan, Duygu; Li, Tzu-Mao
    Dense Geometry Format (DGF) [BBM24] is a hardware-friendly representation for compressed triangle meshes specifically designed to support GPU hardware ray tracing. It decomposes a mesh into meshlets, i.e., small meshes with up to 64 positions, triangles, primitive indices, and opacity values, in a 128-byte block. However, accessing a triangle requires a slow sequential decompression algorithm with O(T) steps, where T is the number of triangles in a DGF block. We propose a novel parallel algorithm with O(logT) steps for arbitrary T. For DGF, where T ≤ 64, we transform our algorithm to allow O(1) access. We believe that our algorithm is suitable for hardware implementations. With our algorithm, a custom intersection shader outperforms the existing serial decompression method. Further, our mesh shader implementation achieves competitive rasterization performance with the vertex pipeline. Finally, we show how our method may parallelize other topology decompression schemes.
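    The O(T)-to-O(log T) improvement described here follows the classic pattern of replacing a serial state walk with a parallel prefix sum. The sketch below is a generic illustration of that pattern, not DGF's actual control stream: a Hillis-Steele scan recovers per-triangle vertex offsets in logarithmically many data-parallel steps, and the strip-style cost encoding is an assumed example:

    ```python
    import numpy as np

    def scan_inclusive(x):
        """Hillis-Steele inclusive prefix sum: O(log T) steps, where each
        loop iteration corresponds to one data-parallel step on a GPU."""
        x = x.astype(np.int64).copy()
        d = 1
        while d < len(x):
            shifted = np.concatenate([np.zeros(d, dtype=x.dtype), x[:-d]])
            x = x + shifted  # all elements updated simultaneously
            d *= 2
        return x

    # Hypothetical strip-style topology stream: each triangle either reuses
    # prior vertices (cost 1 new vertex) or restarts a strip (cost 3). The
    # first vertex index of triangle i is then an exclusive prefix sum of
    # these per-triangle costs, so random access to any triangle becomes
    # O(log T) parallel work instead of a serial O(T) walk.
    costs = np.array([3, 1, 1, 3, 1])
    first_vertex = scan_inclusive(costs) - costs  # exclusive scan
    ```

    For fixed T ≤ 64, as in a DGF block, the loop unrolls to a constant number of steps, which is consistent with the O(1) access the abstract mentions.
    
    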
  • Item
    Multi-Objective Packing of 3D Objects into Arbitrary Containers
    (The Eurographics Association, 2025) Meißenhelter, Hermann; Weller, Rene; Zachmann, Gabriel; Ceylan, Duygu; Li, Tzu-Mao
    Packing problems arise in numerous real-world applications and often take diverse forms. We focus on the relatively underexplored task of packing a set of arbitrary 3D objects, drawn from a predefined distribution, into a single arbitrary 3D container. We simultaneously optimize two potentially conflicting objectives: maximizing the packed volume and maintaining sufficient spacing among objects of the same type to prevent clustering. We present an algorithm to compute solutions to this challenging problem heuristically. Our approach is a flexible two-tier pipeline that computes and refines an initial arrangement. Our results confirm that this approach achieves dense packings across various objects and container shapes.
  • Item
    Double QuickCurve: revisiting 3-axis non-planar 3D printing
    (The Eurographics Association, 2025) Ottonello, Emilio; Hugron, Pierre-Alexandre; Parmiggiani, Alberto; Lefebvre, Sylvain; Ceylan, Duygu; Li, Tzu-Mao
    Additive manufacturing builds physical objects by accumulating layers of solidified material. This is typically done with planar layers. Fused filament printers, however, have the capability to extrude material along 3D curves, leading to the idea of depositing in a non-planar fashion. In this paper we introduce a novel algorithm for this purpose, targeting simplicity, robustness, and efficiency. Our method interpolates curved slicing surfaces between a top and a bottom slicing surface, optimized to align with the object's curvatures. These slicing surfaces are intersected with the input model to extract non-planar layers and curved deposition trajectories. We further orient trajectories according to the object's curvatures, improving deposition.
  • Item
    PartFull: A Hybrid Method for Part-Aware 3D Object Reconstruction from Sparse Views
    (The Eurographics Association, 2025) Yao, Grekou; Mavromatis, Sébastien; Mari, Jean-Luc; Ceylan, Duygu; Li, Tzu-Mao
    Recent advancements in 3D object reconstruction have been significantly enhanced by generative models; however, challenges remain when detailed 3D shapes are reconstructed from limited, sparse views. Traditional methods often require multiple input views and known camera poses, whereas newer approaches that leverage diffusion models from single images encounter real-world data limitations. In response, we propose ''PartFull'', a novel framework for part-aware 3D reconstruction using a hybrid approach. ''PartFull'' generates realistic 3D models from sparse RGB images by combining implicit and explicit representations to optimize surface reconstruction. Starting with sketch-based 3D models from individual views, these models are fused into a coherent object. Our pipeline incorporates a pretrained latent space for part-aware implicit representations and a deformable grid for feature volume construction and surface optimization. PartFull's joint optimization of surface geometry, topology, and implicit part segmentation constitutes a new approach to addressing the challenges of 3D reconstruction from sparse views.
  • Item
    Non-linear, Team-based VR Training for Cardiac Arrest Care with enhanced CRM Toolkit
    (The Eurographics Association, 2025) Kentros, Mike; Kamarianakis, Manos; Cole, Michael; Popov, Vitaliy; Protopsaltis, Antonis; Papagiannakis, George; Ceylan, Duygu; Li, Tzu-Mao
    This paper introduces iREACT, a novel VR simulation addressing key limitations in traditional cardiac arrest (CA) training. Conventional methods struggle to replicate the dynamic nature of real CA events, hindering Crew Resource Management (CRM) skill development. iREACT provides a non-linear, collaborative environment where teams respond to changing patient states, mirroring real CA complexities. By capturing multi-modal data (user actions, cognitive load, visual gaze) and offering real-time and post-session feedback, iREACT enhances CRM assessment beyond traditional methods. A formative evaluation with medical experts underscores its usability and educational value, with potential applications in other high-stakes training scenarios to improve teamwork, communication, and decision-making.
  • Item
    EUROGRAPHICS 2025: Short Papers Frontmatter
    (Eurographics Association, 2025) Ceylan, Duygu; Li, Tzu-Mao