EG 2025 - Short Papers
Browsing EG 2025 - Short Papers by Subject "based rendering"
Now showing 1 - 3 of 3
Item
Automated Skeleton Transformations on 3D Tree Models Captured from an RGB Video
(The Eurographics Association, 2025) Michels, Joren; Moonen, Steven; Güney, Enes; Temsamani, Abdellatif Bey; Michiels, Nick; Ceylan, Duygu; Li, Tzu-Mao
A lot of work has been done on generating realistic-looking 3D models of trees. In most cases, L-systems are used to create variations of specific trees from a set of rules. While achieving good results, these techniques require knowledge of the structure of the tree to construct the generative rules. We propose a system that can create variations of trees captured in a single RGB video. Using our method, plausible variations can be created without prior knowledge of the specific type of tree. This results in a fast and cost-efficient way to generate trees that resemble their real-life counterparts.

Item
Cardioid Caustics Generation with Conditional Diffusion Models
(The Eurographics Association, 2025) Uss, Wojciech; Kaliński, Wojciech; Kuznetsov, Alexandr; Anand, Harish; Kim, Sungye; Ceylan, Duygu; Li, Tzu-Mao
Despite the latest advances in generative neural techniques for producing photorealistic images, they still lack multi-bounce, high-frequency lighting effects such as caustics. In this work, we tackle the problem of generating cardioid-shaped reflective caustics using diffusion-based generative models. We approach this as conditional image generation, using a diffusion-based model conditioned on multiple images of geometric, material, and illumination information as well as light properties. We introduce a framework to fine-tune a pre-trained diffusion model and present results with visually plausible caustics.

Item
Light the Sprite: Pixel Art Dynamic Light Map Generation
(The Eurographics Association, 2025) Nikolov, Ivan; Ceylan, Duygu; Li, Tzu-Mao
Correct lighting and shading are vital for pixel art design. Automating texture generation, such as normal, depth, and occlusion maps, has been a long-standing focus. We extend this by proposing a deep learning model that generates point and directional light maps from RGB pixel art sprites and specified light vectors. Our approach modifies a UNet architecture with CIN layers to incorporate positional and directional information, and uses ZoeDepth to provide depth data for training. Testing on a popular pixel art dataset shows that the generated light maps closely match those derived from depth or normal maps, as well as those made with manual tools. The model effectively relights complex sprites across styles and runs in real time, enhancing artist workflows. The code and dataset are available at https://github.com/IvanNik17/light-sprite.
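For context on the rule-based baseline mentioned in the first abstract (Automated Skeleton Transformations on 3D Tree Models), below is a minimal, illustrative sketch of a stochastic bracketed L-system in Python. The symbols, rules, and probabilities are invented for illustration and are not taken from the paper.

    import random

    # Toy bracketed L-system: rewriting rules grow a tree-skeleton string.
    # Symbols: F = branch segment, [ ] = push/pop a branch, + / - = turn.
    RULES = {
        "F": ["F[+F]F[-F]F", "F[+F]F", "F[-F]F"],
    }

    def rewrite(axiom: str, iterations: int, seed: int = 0) -> str:
        """Apply stochastic production rules to expand the axiom string."""
        rng = random.Random(seed)
        s = axiom
        for _ in range(iterations):
            s = "".join(rng.choice(RULES[c]) if c in RULES else c for c in s)
        return s

    # Different seeds give different plausible variations of the same tree.
    print(rewrite("F", iterations=3, seed=1)[:80])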
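The caustics abstract describes conditioning a diffusion model on multiple images (geometric, material, and illumination information plus light properties). One common way to realize such conditioning is channel-wise concatenation of the condition maps with the noisy image; the sketch below assumes that approach with a toy denoiser, and the layer sizes and timestep encoding are illustrative, not the paper's architecture.

    import torch
    import torch.nn as nn

    class ConditionedDenoiser(nn.Module):
        """Toy denoiser that sees the noisy image, condition maps, and timestep."""
        def __init__(self, cond_channels: int, base: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3 + cond_channels + 1, base, 3, padding=1), nn.SiLU(),
                nn.Conv2d(base, base, 3, padding=1), nn.SiLU(),
                nn.Conv2d(base, 3, 3, padding=1),
            )

        def forward(self, noisy, conds, t):
            # Broadcast the normalized timestep to a map and concatenate it
            # with the noisy image and all conditioning images.
            t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *noisy.shape[-2:])
            return self.net(torch.cat([noisy, conds, t_map], dim=1))

    x = torch.randn(2, 3, 64, 64)   # noisy render
    g = torch.randn(2, 9, 64, 64)   # stacked geometry/material/light maps
    t = torch.rand(2)               # diffusion timesteps in [0, 1]
    eps_pred = ConditionedDenoiser(cond_channels=9)(x, g, t)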
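The pixel-art abstract mentions modulating a UNet with CIN layers driven by a light vector. Below is a minimal PyTorch sketch of a conditional instance normalization layer whose scale and shift are predicted from the conditioning vector; the dimensions and naming are assumptions for illustration, not the published model.

    import torch
    import torch.nn as nn

    class CINLayer(nn.Module):
        """Conditional instance norm: scale/shift come from a light vector."""
        def __init__(self, num_features: int, cond_dim: int):
            super().__init__()
            self.norm = nn.InstanceNorm2d(num_features, affine=False)
            self.to_gamma_beta = nn.Linear(cond_dim, 2 * num_features)

        def forward(self, feat, light_vec):
            gamma, beta = self.to_gamma_beta(light_vec).chunk(2, dim=1)
            gamma = gamma.unsqueeze(-1).unsqueeze(-1)
            beta = beta.unsqueeze(-1).unsqueeze(-1)
            return self.norm(feat) * (1 + gamma) + beta

    feat = torch.randn(1, 32, 64, 64)          # UNet feature map for a sprite
    light = torch.tensor([[0.5, -0.3, 0.8]])   # e.g. a directional light vector
    out = CINLayer(32, 3)(feat, light)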