Browsing by Author "Duan, Ye"
Now showing 1 - 2 of 2
Item
Color3d: Photorealistic Texture Mapping for 3D Mesh
(The Eurographics Association, 2023) Zhao, Chenxi; Fan, Chuanmao; Mohadikar, Payal; Duan, Ye; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
3D reconstruction plays a significant role in research and industry across fields such as medical imaging, architecture, and forensic science. Color quality is one of the criteria that determine reconstruction quality, yet colors predicted by deep learning often lack fidelity and detail. Traditional texture mapping methods can provide superior color, but they are restricted by mesh quality. In this study, we propose Color3D, a comprehensive procedure that applies photorealistic colors to a reconstructed mesh, accommodating both static objects and animations. The required inputs are multiview RGB images, depth images, camera poses, and camera intrinsics. In contrast to traditional methods that take face colors directly from a texture map, our approach assigns vertex colors from the multiview images and obtains each face's color by interpolating the vertex colors of its triangle (a sketch of this interpolation step follows these listings). Our method generates high-quality color for a range of objects, and its performance remains strong even when the input mesh is imperfect.

Item
Multi-scale Monocular Panorama Depth Estimation
(The Eurographics Association, 2023) Mohadikar, Payal; Fan, Chuanmao; Zhao, Chenxi; Duan, Ye; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Panorama images are widely used for scene depth estimation because they provide a comprehensive representation of the scene. Existing deep-learning monocular panorama depth estimation networks produce inconsistent, discontinuous, and poor-quality depth maps. To overcome this, we propose a novel multi-scale monocular panorama depth estimation framework. We adopt a coarse-to-fine approach: multi-scale tangent perspective images, projected from the 360° input, are fed to coarse and fine encoder-decoder networks that produce multi-scale perspective depth maps, which are then merged into low- and high-resolution 360° depth maps. The coarse branch extracts holistic features that guide the features extracted by the fine branch through a Multi-Scale Feature Fusion (MSFF) module at the network bottleneck (see the second sketch below). Experiments on the Stanford2D3D benchmark dataset show that our model outperforms existing methods, producing consistent, smooth, structurally detailed, and accurate depth maps.
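The per-face coloring step described in the Color3D abstract amounts to barycentric interpolation of a triangle's vertex colors. Below is a minimal NumPy sketch of that step; the function name and array layout are assumptions of this illustration, not the paper's code.

```python
import numpy as np

def interpolate_face_color(vertex_colors, bary):
    """Blend a triangle's three vertex colors with barycentric weights.

    vertex_colors: (3, 3) array, one RGB row per triangle vertex.
    bary: (3,) barycentric weights (non-negative, summing to 1).
    Returns the interpolated RGB color at the corresponding surface point.
    """
    return bary @ vertex_colors  # weighted sum of the three vertex colors

# A point at the triangle's centroid blends all three vertex colors equally.
colors = np.array([[1.0, 0.0, 0.0],   # red vertex
                   [0.0, 1.0, 0.0],   # green vertex
                   [0.0, 0.0, 1.0]])  # blue vertex
print(interpolate_face_color(colors, np.array([1/3, 1/3, 1/3])))
# -> [0.333... 0.333... 0.333...]
```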
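The second abstract does not detail the MSFF module's internals, so the following PyTorch sketch shows only one plausible form of a bottleneck block in which coarse-branch features guide fine-branch features; the class name, channel counts, and the upsample-concatenate-1x1-convolution design are assumptions of this illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BottleneckFusion(nn.Module):
    """Hypothetical fusion block: upsample coarse-branch features to the fine
    branch's resolution, concatenate, and mix with a 1x1 convolution so the
    holistic (coarse) context guides the fine features."""

    def __init__(self, coarse_ch: int, fine_ch: int):
        super().__init__()
        self.mix = nn.Conv2d(coarse_ch + fine_ch, fine_ch, kernel_size=1)

    def forward(self, coarse_feat, fine_feat):
        # Match the fine branch's spatial size before fusing.
        coarse_up = F.interpolate(coarse_feat, size=fine_feat.shape[-2:],
                                  mode="bilinear", align_corners=False)
        return self.mix(torch.cat([coarse_up, fine_feat], dim=1))

# Example: fuse 8x8 coarse features into 32x32 fine features.
fusion = BottleneckFusion(coarse_ch=256, fine_ch=128)
out = fusion(torch.randn(1, 256, 8, 8), torch.randn(1, 128, 32, 32))
print(out.shape)  # torch.Size([1, 128, 32, 32])
```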