Eurographics Digital Library

This is the DSpace 7 platform of the Eurographics Digital Library.
  • The contents of the Eurographics Digital Library Archive are freely accessible. Only access to the full-text documents of the journal Computer Graphics Forum (joint property of Wiley and Eurographics) is restricted to Eurographics members, members of institutions that hold an Institutional Membership with Eurographics, and users of the TIB Hannover. On the item pages you will find purchase links to the TIB Hannover.
  • As a Eurographics member, you can log in with your email address and password from https://services.eg.org. If you belong to an institutional member and are using a computer within a Eurographics-registered IP range, you can proceed immediately.
  • From 2022 onward, all new publications by Eurographics are licensed under Creative Commons. Publishing with Eurographics is Plan-S compliant. Please see the Eurographics Licensing and Open Access Policy for more details.
 

Recent Submissions

Item
SHREC'25 track: Retrieval and Segmentation of Multiple Relief Patterns
(The Eurographics Association, 2025) Paolini, Gabriele; Tortorici, Claudio; Berretti, Stefano; Guerrero, Paul; Pratikakis, Ioannis; Veltkamp, Remco
This SHREC 2025 track focuses on the recognition and segmentation of relief patterns embedded on the surface of a novel set of synthetically generated triangle meshes. Although the track garnered considerable interest, the problem remained open at its conclusion. In this report, we introduce a new 3D benchmark, published to assess the performance of the most recent relief pattern recognition algorithms. We discuss the limitations of current techniques, the intrinsic challenges of relief pattern analysis, and potential future research directions in this field.
Item
Coupling Self-Distillation with Test Time Augmentation for effective LiDAR-Based 3D Semantic Segmentation
(The Eurographics Association, 2025) Antonarakos, Dimitrios; Zamanakos, Georgios; Papadeas, Ilias; Pratikakis, Ioannis; Guerrero, Paul; Pratikakis, Ioannis; Veltkamp, Remco
Effective 3D perception is fundamental for spatial awareness and safe navigation in modern autonomous systems, with 3D semantic segmentation of LiDAR point clouds being a critical perception task. Recent progress in 2D vision highlights the potential of non-architectural training and inference strategies to further boost model performance. Inspired by consistency-based learning and self-distillation, this work employs such a training pipeline for robust 3D semantic segmentation in street scene understanding. Specifically, we incorporate a teacher-student knowledge self-distillation framework that integrates Test-Time Augmentation to enhance the quality of the soft labels generated by the teacher model during training and to improve inference performance. We present a comparative study on the effectiveness of the employed framework across both convolutional and attention-enhanced networks. Experimental results on the Street3D benchmark dataset demonstrate that the adopted training framework coupled with attention-enhanced networks compares favorably with the state-of-the-art for 3D semantic segmentation in the context of autonomous driving. Code is available at https://github.com/DUTH-VCG/Self_Distillation_with_TTA-main
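The core idea of the abstract above, averaging a teacher model's predictions over several augmented views of the same point cloud to produce higher-quality soft labels for the student, can be illustrated with a minimal numpy sketch. The toy teacher, the rotation augmentations, and all function names here are illustrative assumptions, not the repository's actual API:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def tta_soft_labels(points, teacher, augmentations):
    """Average the teacher's per-point class probabilities over
    several augmented views of the same point cloud (Test-Time
    Augmentation); the result serves as soft labels for the student."""
    probs = [softmax(teacher(aug(points))) for aug in augmentations]
    return np.mean(probs, axis=0)  # shape (N, num_classes)

def rotate_z(angle):
    """Rigid rotation about the vertical axis, a common LiDAR augmentation."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return lambda pts: pts @ R.T

# Toy stand-in teacher: scores each point by distance to fixed class centroids.
rng = np.random.default_rng(0)
centroids = rng.normal(size=(4, 3))  # 4 classes in 3D
def teacher(pts):
    return -np.linalg.norm(pts[:, None, :] - centroids[None], axis=-1)

points = rng.normal(size=(128, 3))
soft = tta_soft_labels(points, teacher,
                       [rotate_z(a) for a in (0.0, np.pi / 2, np.pi)])
```

In the actual framework the student would then be trained against `soft` (e.g. with a KL-divergence loss) alongside the usual hard-label loss; the same averaging can be reused at inference time.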
Item
PhyDeformer: High-Quality Non-Rigid Garment Registration with Physics-Awareness
(The Eurographics Association, 2025) Yu, Boyang; Cordier, Frederic; Seo, Hyewon; Guerrero, Paul; Pratikakis, Ioannis; Veltkamp, Remco
Accurately registering 3D garment meshes to real-world image data is a fundamental yet challenging task in computer vision and graphics, with applications in virtual try-on systems, digital fashion, performance capture, and virtual content creation. This problem involves recovering detailed, non-rigid garment geometry from partial, noisy, and often ambiguous visual cues extracted from 2D or reconstructed 3D data. A key challenge lies in aligning garment templates with target shapes while preserving realistic fabric behavior and accommodating variations in body shape, garment fit, and pose. We present PhyDeformer, a new deformation method for high-quality garment mesh registration. It operates in two phases: In the first phase, a garment grading is performed to achieve a coarse 3D alignment between the mesh template and the target mesh, accounting for proportional scaling and fit (e.g. length, size). In the second phase, the graded mesh is refined to capture fine-grained geometric details of the 3D target through a localized optimization process, leveraging a Jacobian-based deformation framework. Both quantitative and qualitative evaluations on synthetic and real garment data demonstrate the effectiveness and robustness of our method in achieving accurate and visually plausible registrations. The code and base meshes generated and evaluated in this paper are available at https://github.com/MLMS-CG/PhyDeformer.
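The two-phase structure described above can be sketched in a few lines of numpy. This is a heavily simplified analogue: phase 1 stands in for the "garment grading" with a plain similarity alignment (uniform scale plus translation), and phase 2 replaces the paper's Jacobian-based localized optimization with naive per-vertex nearest-neighbour fitting. All names here are illustrative, not PhyDeformer's API:

```python
import numpy as np

def coarse_align(template, target):
    """Phase 1 (toy analogue of garment grading): match the target's
    centroid and overall extent with a uniform scale + translation."""
    tc, gc = template.mean(axis=0), target.mean(axis=0)
    scale = np.linalg.norm(target - gc) / np.linalg.norm(template - tc)
    return (template - tc) * scale + gc

def refine(template, target, steps=50, lr=0.5):
    """Phase 2 (simplified): pull each vertex toward its nearest target
    point; the paper instead optimizes per-triangle Jacobians, which
    preserves local fabric structure far better than this."""
    verts = template.copy()
    for _ in range(steps):
        # brute-force nearest-neighbour correspondences
        d = np.linalg.norm(verts[:, None] - target[None], axis=-1)
        nn = target[d.argmin(axis=1)]
        verts += lr * (nn - verts)
    return verts
```

A usage sketch: given a template that is a shrunken, shifted copy of the target point set, `coarse_align` recovers the global fit and `refine` snaps the remaining residual.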
Item
Eurographics Workshop on 3D Object Retrieval - Short Papers: Frontmatter
(The Eurographics Association, 2025) Guerrero, Paul; Pratikakis, Ioannis; Veltkamp, Remco; Guerrero, Paul; Pratikakis, Ioannis; Veltkamp, Remco
Item
Reshadable Impostors with Level-of-Detail for Real-Time Distant Objects Rendering
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Wu, Xiaoloong; Zeng, Zheng; Zhu, Junqiu; Wang, Lu; Wang, Beibei; Wilkie, Alexander
We propose a new image-based representation for real-time distant objects rendering: Reshadable Impostors with Level-of-Detail (RiLoD). By storing compact geometric and material information captured from a few reference views, RiLoD enables reliable forward mapping to generate target views under dynamic lighting and edited material attributes. In addition, it supports seamless transitions across different levels of detail. To support reshading and LoD simultaneously while maintaining a minimal memory footprint and bandwidth requirement, our key design is a compact yet efficient representation that encodes and compresses the necessary material and geometric information in each reference view. To further improve the visual fidelity, we use a reliable forward mapping technique combined with a hole-filling filtering strategy to ensure geometric completeness and shading consistency. We demonstrate the practicality of RiLoD by integrating it into a modern real-time renderer. RiLoD delivers fast performance across a variety of test scenes, supports smooth transitions between levels of detail as the camera moves closer or farther, and avoids the typical artifacts of impostor techniques that result from neglecting the underlying geometry.
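The forward mapping plus hole filling that the abstract above describes can be illustrated with a drastically reduced 2D sketch: reference pixels are splatted into the target view with a depth-dependent shift (a stand-in for full camera reprojection), a z-buffer resolves conflicts, and disocclusion holes are filled from a valid neighbour. The shift model and function names are assumptions for illustration only, not RiLoD's actual algorithm:

```python
import numpy as np

def forward_map(ref_color, ref_depth, shift_scale=2.0):
    """Splat each reference pixel into the target view with a
    depth-dependent horizontal shift (parallax: nearer pixels shift
    more); a z-buffer lets nearer pixels win conflicts.  Returns the
    target image and a mask of unfilled (hole) pixels."""
    h, w = ref_depth.shape
    target = np.zeros_like(ref_color)
    zbuf = np.full((h, w), np.inf)
    for y in range(h):
        for x in range(w):
            d = ref_depth[y, x]
            tx = x + int(round(shift_scale / d))
            if 0 <= tx < w and d < zbuf[y, tx]:
                zbuf[y, tx] = d
                target[y, tx] = ref_color[y, x]
    return target, np.isinf(zbuf)

def fill_holes(target, holes):
    """Naive hole filling: copy from the nearest valid pixel to the left
    (real systems use a filtering strategy aware of depth and shading)."""
    out = target.copy()
    for y in range(out.shape[0]):
        for x in range(1, out.shape[1]):
            if holes[y, x]:
                out[y, x] = out[y, x - 1]
    return out
```

With a uniform depth of 1 and `shift_scale=2.0`, every pixel shifts two columns right, leaving the two leftmost target columns as disocclusion holes that `fill_holes` then patches.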