EG2025
Browsing EG2025 by Subject "based rendering"
Now showing 1 - 8 of 8
Item: Automated Skeleton Transformations on 3D Tree Models Captured from an RGB Video
(The Eurographics Association, 2025) Michels, Joren; Moonen, Steven; Güney, Enes; Temsamani, Abdellatif Bey; Michiels, Nick; Ceylan, Duygu; Li, Tzu-Mao
A lot of work has been done on generating realistic-looking 3D models of trees. In most cases, L-systems are used to create variations of specific trees from a set of rules. While these techniques achieve good results, they require knowledge of the tree's structure to construct the generative rules. We propose a system that creates variations of trees captured in a single RGB video. Using our method, plausible variations can be created without prior knowledge of the specific type of tree. This results in a fast and cost-efficient way to generate trees that resemble their real-life counterparts.

Item: Cardioid Caustics Generation with Conditional Diffusion Models
(The Eurographics Association, 2025) Uss, Wojciech; Kaliński, Wojciech; Kuznetsov, Alexandr; Anand, Harish; Kim, Sungye; Ceylan, Duygu; Li, Tzu-Mao
Despite the latest advances in generative neural techniques for producing photorealistic images, they lack the ability to generate multi-bounce, high-frequency lighting effects such as caustics. In this work, we tackle the problem of generating cardioid-shaped reflective caustics using diffusion-based generative models. We approach this problem as conditional image generation, using a diffusion-based model conditioned on multiple images of geometric, material, and illumination information as well as light properties. We introduce a framework to fine-tune a pre-trained diffusion model and present results with visually plausible caustics.
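A minimal sketch of how such multi-image conditioning is commonly wired up, assuming a latent-diffusion-style denoiser that takes the conditioning maps as extra input channels; the channel layout, names, and shapes below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConditionedDenoiser(nn.Module):
    """Toy denoiser conditioned on per-pixel G-buffer images (hypothetical layout)."""
    def __init__(self, latent_ch=4, cond_ch=7, hidden=64):
        super().__init__()
        # Assumed conditioning channels: normals (3) + depth (1) + roughness (1) + light parameters (2).
        self.net = nn.Sequential(
            nn.Conv2d(latent_ch + cond_ch, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, latent_ch, 3, padding=1),
        )

    def forward(self, noisy_latent, cond):
        # Concatenate the conditioning maps with the noisy latent along the channel axis,
        # a common way to feed dense per-pixel conditions to a diffusion denoiser.
        return self.net(torch.cat([noisy_latent, cond], dim=1))

x = torch.randn(1, 4, 64, 64)       # noisy latent
g = torch.randn(1, 7, 64, 64)       # stacked geometry/material/illumination maps
eps_pred = ConditionedDenoiser()(x, g)  # predicted noise, shape (1, 4, 64, 64)
```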
Item: D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Kappel, Moritz; Hahlbohm, Florian; Scholz, Timon; Castillo, Susana; Theobalt, Christian; Eisemann, Martin; Golyanik, Vladislav; Magnor, Marcus; Bousseau, Adrien; Day, Angela
Dynamic reconstruction and spatiotemporal novel-view synthesis of non-rigidly deforming scenes have recently gained increasing attention. While existing work achieves impressive quality and performance on multi-view or teleporting-camera setups, most methods fail to efficiently and faithfully recover motion and appearance from casual monocular captures. This paper contributes to the field by introducing a new method for dynamic novel-view synthesis from monocular video, such as casual smartphone captures. Our approach represents the scene as a dynamic neural point cloud, an implicit time-conditioned point distribution that encodes local geometry and appearance in separate hash-encoded neural feature grids for static and dynamic regions. By sampling a discrete point cloud from our model, we can efficiently render high-quality novel views using a fast differentiable rasterizer and neural rendering network. Similar to recent work, we leverage advances in neural scene analysis by incorporating data-driven priors such as monocular depth estimation and object segmentation to resolve motion and depth ambiguities originating from the monocular captures. In addition to guiding the optimization process, we show that these priors can be exploited to explicitly initialize our scene representation, drastically improving optimization speed and final image quality. As evidenced by our experimental evaluation, our dynamic point cloud model not only enables fast optimization and real-time frame rates for interactive applications, but also achieves competitive image quality on monocular benchmark sequences. Our code and data are available online at https://moritzkappel.github.io/projects/dnpc/.

Item: Does 3D Gaussian Splatting Need Accurate Volumetric Rendering?
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Celarek, Adam; Kopanas, Georgios; Drettakis, George; Wimmer, Michael; Kerbl, Bernhard; Bousseau, Adrien; Day, Angela
Since its introduction, 3D Gaussian Splatting (3DGS) has become an important reference method for learning 3D representations of a captured scene, allowing real-time novel-view synthesis with high visual quality and fast training times. Neural Radiance Fields (NeRFs), which preceded 3DGS, are based on a principled ray-marching approach for volumetric rendering. In contrast, 3DGS, while sharing a similar image formation model with NeRF, uses a hybrid rendering solution that builds on the strengths of volume rendering and primitive rasterization. A crucial benefit of 3DGS is its performance, achieved through a set of approximations, in many cases with respect to volumetric rendering theory. A naturally arising question is whether replacing these approximations with more principled volumetric rendering solutions can improve the quality of 3DGS. In this paper, we present an in-depth analysis of the various approximations and assumptions used by the original 3DGS solution. We demonstrate that, while more accurate volumetric rendering can help for low numbers of primitives, the power of efficient optimization and the large number of Gaussians allows 3DGS to outperform volumetric rendering despite its approximations.

Item: Light the Sprite: Pixel Art Dynamic Light Map Generation
(The Eurographics Association, 2025) Nikolov, Ivan; Ceylan, Duygu; Li, Tzu-Mao
Correct lighting and shading are vital for pixel art design. Automating texture generation, such as normal, depth, and occlusion maps, has been a long-standing focus. We extend this by proposing a deep learning model that generates point and directional light maps from RGB pixel art sprites and specified light vectors. Our approach modifies a UNet architecture with CIN layers to incorporate positional and directional information, using ZoeDepth to produce the depth data used in training. Testing on a popular pixel art dataset shows that the generated light maps closely match those computed from depth or normal maps, as well as those created with manual tools. The model effectively relights complex sprites across styles and runs in real time, enhancing artist workflows. The code and dataset are available at https://github.com/IvanNik17/light-sprite.
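A minimal sketch, under my own assumptions, of conditional instance normalization (CIN) driven by a light direction vector, in the spirit of conditioning a UNet stage on light parameters; the layer names, shapes, and conditioning layout are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn

class CINBlock(nn.Module):
    """Conditional instance normalization: per-channel scale/shift predicted from a condition vector."""
    def __init__(self, channels, cond_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.to_scale_shift = nn.Linear(cond_dim, channels * 2)

    def forward(self, x, cond):
        gamma, beta = self.to_scale_shift(cond).chunk(2, dim=1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1) for broadcasting
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return self.norm(x) * (1 + gamma) + beta

# Example: modulate sprite features by a 3D light direction (assumed conditioning input).
feats = torch.randn(2, 32, 64, 64)   # feature maps from a UNet encoder stage
light_dir = torch.randn(2, 3)        # per-sample light direction vector
out = CINBlock(32, 3)(feats, light_dir)
```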
Item: Material Transforms from Disentangled NeRF Representations
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Lopes, Ivan; Lalonde, Jean-François; Charette, Raoul de; Bousseau, Adrien; Day, Angela
In this paper, we propose a novel method for transferring material transformations across different scenes. Building on disentangled Neural Radiance Field (NeRF) representations, our approach learns to map Bidirectional Reflectance Distribution Functions (BRDFs) from pairs of scenes observed in varying conditions, such as dry and wet. The learned transformations can then be applied to unseen scenes with similar materials, and the learned transformation can be rendered at an arbitrary level of intensity. Extensive experiments on synthetic scenes and real-world objects validate the effectiveness of our approach, showing that it can learn various transformations such as wetness, painting, and coating. Our results highlight not only the versatility of our method but also its potential for practical applications in computer graphics. We publish our method implementation, along with our synthetic and real datasets, at https://github.com/astra-vision/BRDFTransform.

Item: NoPe-NeRF++: Local-to-Global Optimization of NeRF with No Pose Prior
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Shi, Dongbo; Cao, Shen; Wu, Bojian; Guo, Jinhui; Fan, Lubin; Chen, Renjie; Liu, Ligang; Ye, Jieping; Bousseau, Adrien; Day, Angela
In this paper, we introduce NoPe-NeRF++, a novel local-to-global optimization algorithm for training Neural Radiance Fields (NeRF) without requiring pose priors. Existing methods, particularly NoPe-NeRF, focus solely on the local relationships within images and often struggle to recover accurate camera poses in complex scenarios. To overcome these challenges, our approach begins with relative pose initialization based on explicit feature matching, followed by a local joint optimization that enhances the pose estimates used to train a more robust NeRF representation. This significantly improves the quality of the initial poses. Additionally, we introduce a global optimization phase that incorporates geometric consistency constraints through bundle adjustment, integrating feature trajectories to further refine the poses and collectively boost the quality of the NeRF. Notably, our method is the first to seamlessly combine local and global cues with NeRF, and it outperforms state-of-the-art methods in both pose estimation accuracy and novel-view synthesis. Extensive evaluations on benchmark datasets demonstrate its superior performance and robustness, even in challenging scenes, validating our design choices.

Item: VisibleUS: From Cryosectional Images to Real-Time Ultrasound Simulation
(The Eurographics Association, 2025) Casanova-Salas, Pablo; Gimeno, Jesus; Blasco-Serra, Arantxa; González-Soler, Eva María; Escamilla-Muñoz, Laura; Valverde-Navarro, Alfonso Amador; Fernández, Marcos; Portalés, Cristina; Günther, Tobias; Montazeri, Zahra
The VisibleUS project aims to generate synthetic ultrasound images from cryosection images, focusing on the musculoskeletal system. Cryosection images provide a highly accurate representation of real tissue structures without artifacts. Using this rich anatomical data, we developed a ray-tracing-based simulation algorithm that models ultrasound wave propagation, scattering, and attenuation. This results in highly realistic ultrasound images that accurately depict fine anatomical details, such as muscle fibers and connective tissues. The simulation tool has various applications, including generating datasets for training neural networks and developing interactive training tools for ultrasound specialists. Its ability to produce realistic ultrasound images in real time enhances medical education and research, improving both the understanding and interpretation of ultrasound imaging.
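A minimal sketch, under my own assumptions, of the kind of per-ray attenuation accumulation an ultrasound ray tracer can use (Beer-Lambert-style intensity decay sampled along a straight ray through a labeled slice); the tissue map, coefficients, and step size are illustrative and not the VisibleUS implementation.

```python
import numpy as np

def attenuate_along_ray(tissue_mu, origin, direction, step=0.5, max_depth=200.0):
    """Accumulate intensity decay I = I0 * exp(-sum(mu_i * ds)) along a straight ray.

    tissue_mu: 2D array of attenuation coefficients (e.g. derived from a labeled cryosection slice).
    origin, direction: ray start (row, col) and direction in pixel units.
    Returns the remaining intensity at each sample along the ray.
    """
    pos = np.array(origin, dtype=float)
    d = np.array(direction, dtype=float)
    d /= np.linalg.norm(d)
    intensity, profile = 1.0, []
    for _ in range(int(max_depth / step)):
        r, c = int(round(pos[0])), int(round(pos[1]))
        if not (0 <= r < tissue_mu.shape[0] and 0 <= c < tissue_mu.shape[1]):
            break
        intensity *= np.exp(-tissue_mu[r, c] * step)  # Beer-Lambert decay over one step
        profile.append(intensity)
        pos += d * step
    return np.array(profile)

# Example on a toy two-layer "tissue" slice with hypothetical coefficients.
mu = np.full((256, 256), 0.002)
mu[128:, :] = 0.01  # denser tissue attenuates more strongly
depth_profile = attenuate_along_ray(mu, origin=(0, 128), direction=(1.0, 0.0))
```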