Browsing by Author "Xu, Panpan"
Now showing 1 - 3 of 3
Item: Generating High-quality Superpixels in Textured Images (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Zhang, Zhe; Xu, Panpan; Chang, Jian; Wang, Wencheng; Zhao, Chong; Zhang, Jian Jun
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
Abstract: Superpixel segmentation is important for many image processing tasks. However, existing methods still have difficulty generating high-quality superpixels in textured images, because they cannot separate textures from structures well. Although texture filtering can be applied to smooth textures before superpixel segmentation, it also smooths object boundaries and thus degrades the quality of the generated superpixels. In this paper, we propose adaptive-scale box smoothing in place of texture filtering to obtain higher-quality texture and boundary information. Building on this, we design a novel distance metric between pixels that considers boundary, color, and Euclidean distance simultaneously. As a result, our method achieves high-quality superpixel segmentation in textured images without texture filtering. Experimental results demonstrate the superiority of our method over existing methods, including learning-based ones. Because boundaries guide the segmentation, our method can also suppress noise and generate high-quality superpixels in non-textured images.
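A minimal sketch of how a combined pixel distance of this kind might look (an illustration only, not the paper's exact metric; the boundary_cost input, the weights, and the step-based normalization are assumptions):

# Hedged sketch, not the authors' exact formulation: a SLIC-style pixel-to-seed
# distance that combines color, spatial (Euclidean), and boundary terms, as the
# abstract describes.
import numpy as np

def pixel_distance(pixel_lab, pixel_xy, seed_lab, seed_xy,
                   boundary_cost, step, w_color=1.0, w_space=0.5, w_boundary=2.0):
    """Combine color, spatial, and boundary evidence into a single distance."""
    d_color = np.linalg.norm(np.asarray(pixel_lab) - np.asarray(seed_lab))       # CIELAB color difference
    d_space = np.linalg.norm(np.asarray(pixel_xy) - np.asarray(seed_xy)) / step  # Euclidean distance, scaled by grid step
    # boundary_cost: edge strength accumulated along the pixel-to-seed path
    # (a hypothetical stand-in for the paper's boundary information).
    return w_color * d_color + w_space * d_space + w_boundary * boundary_cost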
Item: Image Composition of Partially Occluded Objects (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Tan, Xuehan; Xu, Panpan; Guo, Shihui; Wang, Wencheng
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
Abstract: Image composition extracts the content of interest (COI) from a source image and blends it into a target image to generate a new image. In most existing works, the COI is manually extracted and then overlaid on top of the target image. In practice, however, the COI is often partially occluded by the target image content. In that case, both extracting the COI and cropping its occluded part require intensive user interaction, which is laborious and greatly reduces composition efficiency. This paper addresses these challenges with an efficient image composition method. First, we extract the semantic contents of the images using state-of-the-art deep learning methods, so the COI can be selected with clicks only, greatly reducing the required user interaction. Second, from the user's operations on the COI (such as translation or scaling), we effectively infer the occlusion relationships between the COI and the contents of the target image. The COI can thus be adaptively embedded into the target image without the need to crop its occluded part. Content extraction and occlusion handling are therefore significantly simplified, and work efficiency is markedly improved. Experimental results show that, compared with existing works, our method reduces the number of user interactions to approximately one-tenth and speeds up image composition by more than a factor of ten.

Item: Intrinsic Symmetry Detection on 3D Models with Skeleton-guided Combination of Extrinsic Symmetries (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Wang, Wencheng; Ma, Junhui; Xu, Panpan; Chu, Yiyao
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
Abstract: Existing methods for intrinsic symmetry detection on 3D models require complex measures, such as geodesic distances, to describe intrinsic geometry, as well as statistical computation to find the non-rigid transformations that associate symmetric shapes. They are expensive, may miss symmetries, and cannot guarantee high-quality symmetric parts. We observe that only extrinsic symmetries exist between convex shapes, and that two shapes are intrinsically symmetric if their constituent convex sub-shapes are correspondingly symmetric to each other and connected in a similar topological structure. We therefore decompose the model into convex parts and use similar structures in the model's skeleton to guide the combination of extrinsic symmetries between convex parts for intrinsic symmetry detection. In this way, we dispense with statistical computation for intrinsic symmetry detection and avoid complex measures for describing intrinsic geometry. By growing the similar structures from small to large, we can quickly detect multi-scale partial intrinsic symmetries in a bottom-up manner. Because the convex parts are well segmented, the symmetric parts we obtain are of high quality. Experimental results show that our method finds many more symmetries and runs much faster than existing methods, even by several orders of magnitude.
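A minimal sketch of the skeleton-guided idea in the last item (an illustrative simplification, not the paper's algorithm; the PCA-extent descriptor, the degree-based skeleton check, and the tolerance are assumptions):

# Hedged sketch: pair up convex parts that look extrinsically symmetric and keep
# only pairs whose skeleton neighborhoods agree; matched pairs would then be
# grown along the skeleton, small to large, into multi-scale intrinsic symmetries.
import numpy as np
from itertools import combinations

def part_signature(points):
    """Crude pose-invariant descriptor of a convex part: sorted PCA extents."""
    pts = np.asarray(points)
    centered = pts - pts.mean(axis=0)
    return np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]

def symmetric_part_pairs(parts, skeleton_adj, tol=0.05):
    """parts: list of (N_i, 3) point arrays; skeleton_adj: dict part_id -> set of neighbor ids."""
    sigs = [part_signature(p) for p in parts]
    pairs = []
    for i, j in combinations(range(len(parts)), 2):
        # Candidate extrinsic symmetry: the two convex parts have nearly identical shape.
        if np.linalg.norm(sigs[i] - sigs[j]) < tol * (np.linalg.norm(sigs[i]) + 1e-9):
            # Skeleton guidance: the parts should sit in similar local structures,
            # approximated here by equal skeleton degree.
            if len(skeleton_adj[i]) == len(skeleton_adj[j]):
                pairs.append((i, j))
    return pairs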