Browsing by Author "Lai, Yu-Kun"
Now showing 1 - 2 of 2
Item
An Image-based Model for 3D Shape Quality Measure
(The Eurographics Association, 2023) Alhamazani, Fahd; Rosin, Paul L.; Lai, Yu-Kun; Vangorp, Peter; Hunter, David

With the growth of research on 3D shapes and the increasing processing capability of GPUs, the number of available 3D applications has risen significantly. Many of these applications require assessing the perceptual quality of 3D shapes, and due to the nature of 3D representations, this quality assessment may take various forms. While it is straightforward to measure geometric distortions directly on the 3D shape geometry, such measures are often inconsistent with human perception of quality. In most cases, human viewers perceive 3D shapes through their 2D renderings, so it is plausible to measure shape quality using these renderings. In this paper, we present an image-based quality metric for evaluating 3D shape quality given the original and distorted shapes. To provide good coverage of the 3D geometry from different views, we render each shape from 12 equally spaced viewpoints, using a variety of rendering styles to capture different aspects of its visual characteristics. Image-based metrics such as SSIM (Structural Similarity Index Measure) are then used to measure the quality of the 3D shapes. Our experiments show that, by selecting a suitable combination of rendering styles and building a neural-network-based model, we achieve significantly better prediction of subjective perceptual quality than existing methods.

Item
RPS-Net: Indoor Scene Point Cloud Completion using RBF-Point Sparse Convolution
(The Eurographics Association, 2023) Wang, Tao; Wu, Jing; Ji, Ze; Lai, Yu-Kun; Vangorp, Peter; Hunter, David

We introduce a novel approach to the completion of 3D scenes, a practically important task since captured point clouds of 3D scenes tend to be incomplete due to limited sensor range and occlusion.
We address this problem by applying sparse convolutions, commonly used for recognition tasks, to this content-generation task; they capture spatial relationships well while remaining highly efficient, as only samples near the surface need to be processed. However, traditional sparse convolutions consider only grid occupancies, which cannot accurately locate surface points and introduce unavoidable quantisation errors. Observing that local surface patches share common patterns, we propose to sample a Radial Basis Function (RBF) field within each grid cell, which is then compactly represented using a Point Encoder-Decoder (PED) network. This provides a compact and effective representation for 3D completion, and the decoded latent feature carries important information about the local region of the point cloud, enabling more accurate, sub-voxel-level completion. Extensive experiments demonstrate that our method outperforms state-of-the-art methods by a large margin.
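The image-based metric described in the first abstract can be illustrated with a minimal sketch: SSIM computed between an original and a distorted rendering, averaged over the set of viewpoints. This simplified version uses a single global window rather than the standard sliding Gaussian window, and the function names and averaging scheme are illustrative assumptions, not the paper's implementation (which additionally feeds such metrics into a neural-network-based model).

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified single-window SSIM between two grayscale images.

    Standard SSIM uses a sliding Gaussian window; this global variant
    treats the whole image as one window, for illustration only.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

def shape_quality(original_views, distorted_views):
    """Average SSIM over renderings from multiple viewpoints
    (the paper renders 12 equally spaced views per shape)."""
    return float(np.mean([ssim_global(o, d)
                          for o, d in zip(original_views, distorted_views)]))
```

Identical views score 1.0, and scores decrease as the distorted renderings diverge from the originals.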
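The RBF-field idea in the second abstract can be sketched as follows: within an occupied grid cell, an implicit field is evaluated as a sum of Gaussian RBFs centred on nearby surface points, so sampling the field inside the cell recovers sub-voxel surface detail that a binary occupancy grid cannot represent. The kernel width, uniform weights, lattice resolution, and function names are illustrative assumptions; the paper's Point Encoder-Decoder network, which compresses such samples into a latent feature, is not shown here.

```python
import numpy as np

def rbf_field(query_points, surface_points, sigma=0.05):
    """Evaluate a Gaussian RBF field at query locations.

    f(q) = sum_i exp(-||q - p_i||^2 / (2 * sigma^2)),
    so higher values indicate greater proximity to the surface.
    """
    # Broadcast (Q, 1, 3) - (1, P, 3) -> pairwise squared distances (Q, P)
    d2 = np.sum((query_points[:, None, :] - surface_points[None, :, :]) ** 2,
                axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2)).sum(axis=1)

def sample_cell(cell_origin, cell_size, surface_points, n=4, sigma=0.05):
    """Sample the RBF field on an n x n x n lattice inside one voxel."""
    t = (np.arange(n) + 0.5) / n * cell_size          # cell-centred offsets
    gx, gy, gz = np.meshgrid(t, t, t, indexing="ij")
    queries = cell_origin + np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    return rbf_field(queries, surface_points, sigma).reshape(n, n, n)
```

For a surface point near the centre of a unit voxel, the sampled field peaks at the interior lattice sites closest to that point and falls off towards the cell corners.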