Eurographics Conferences
Browsing Eurographics Conferences by Subject "3D reconstruction"
Now showing 1 - 2 of 2
From Few to Full: High-Resolution 3D Object Reconstruction from Sparse Views and Unknown Poses (The Eurographics Association, 2024)
Yao, Grekou; Mavromatis, Sebastien; Mari, Jean-Luc; Liu, Lingjie; Averkiou, Melinos
Recent progress in 3D reconstruction has been driven by generative models, moving from traditional multi-view dependence to techniques based on single-image diffusion models. However, these approaches often struggle in sparse-view scenarios, require known poses or template shapes, and frequently fail to produce high-resolution reconstructions. To address these issues, we introduce the "F2F" (Few to Full) framework, designed to produce high-resolution, fully realistic 3D objects from few views and unknown camera poses, without external constraints. F2F takes a hybrid approach, optimizing both implicit and explicit representations through a pipeline that combines a pretrained diffusion model for pose estimation, a deformable tetrahedral grid for feature volume construction, and an MLP for surface optimization. Our method sets a new standard by ensuring consistency of surface geometry, topology, and semantics through differentiable rendering, aiming for a comprehensive solution to 3D reconstruction from sparse views.

PartFull: A Hybrid Method for Part-Aware 3D Object Reconstruction from Sparse Views (The Eurographics Association, 2025)
Yao, Grekou; Mavromatis, Sébastien; Mari, Jean-Luc; Ceylan, Duygu; Li, Tzu-Mao
Recent advances in 3D object reconstruction have been significantly driven by generative models; however, reconstructing detailed 3D shapes from limited, sparse views remains challenging. Traditional methods often require multiple input views and known camera poses, whereas newer approaches that leverage single-image diffusion models run into real-world data limitations. In response, we propose "PartFull", a novel framework for part-aware 3D reconstruction that takes a hybrid approach. PartFull generates realistic 3D models from sparse RGB images by combining implicit and explicit representations to optimize surface reconstruction. Starting from sketch-based 3D models obtained from individual views, the pipeline fuses them into a coherent object. Our pipeline incorporates a pretrained latent space for part-aware implicit representations and a deformable grid for feature volume construction and surface optimization. PartFull's joint optimization of surface geometry, topology, and implicit part segmentation offers a new approach to the challenges of 3D reconstruction from sparse views.
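
The F2F abstract above describes a hybrid implicit/explicit pipeline: poses estimated with a pretrained diffusion model, a deformable grid on the explicit side, and an MLP-based implicit surface refined through differentiable rendering. The following is a minimal PyTorch sketch of how such a joint optimization loop could be wired together; the SDFNet class, the placeholder estimate_poses function, and the dummy loss are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class SDFNet(nn.Module):
    # Small MLP mapping 3D points to a signed distance value (implicit side).
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, x):
        return self.net(x)

def estimate_poses(num_views):
    # Placeholder: the paper uses a pretrained diffusion model for pose
    # estimation; identity poses stand in for it here.
    return torch.eye(4).repeat(num_views, 1, 1)

# Explicit side: a coarse vertex grid with learnable deformations.
grid_vertices = torch.rand(512, 3)
deform = nn.Parameter(torch.zeros_like(grid_vertices))

sdf = SDFNet()
poses = estimate_poses(num_views=3)  # would drive the differentiable renderer
opt = torch.optim.Adam(list(sdf.parameters()) + [deform], lr=1e-3)

for step in range(100):
    verts = grid_vertices + deform   # deformed explicit grid
    sdf_values = sdf(verts)          # implicit field queried at the grid vertices
    # Dummy objective: the real pipeline renders the extracted surface with the
    # estimated poses and compares it against the sparse input views.
    loss = sdf_values.pow(2).mean() + 1e-3 * deform.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()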
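
For the PartFull entry, the part-aware implicit representation can be pictured as a single network that predicts both a signed distance and per-part segmentation logits for each query point, which is what the joint optimization of geometry and implicit part segmentation acts on. Below is a minimal, hypothetical PyTorch sketch of such a representation; the class name, the number of parts, and the random query points are assumptions for illustration, not the authors' code.

import torch
import torch.nn as nn

class PartAwareSDF(nn.Module):
    # MLP mapping a 3D point to a signed distance and part-segmentation logits.
    def __init__(self, num_parts=4, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sdf_head = nn.Linear(hidden, 1)
        self.part_head = nn.Linear(hidden, num_parts)
    def forward(self, points):
        feat = self.backbone(points)
        return self.sdf_head(feat), self.part_head(feat)

model = PartAwareSDF()
points = torch.rand(1024, 3)              # random query points
sdf, part_logits = model(points)
part_probs = part_logits.softmax(dim=-1)  # soft part assignment per point
print(sdf.shape, part_probs.shape)        # torch.Size([1024, 1]) torch.Size([1024, 4])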