Anisotropic Superpixel Generation Based on Mahalanobis Distance
dc.contributor.author | Cai, Yiqi | en_US |
dc.contributor.author | Guo, Xiaohu | en_US |
dc.contributor.editor | Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi | en_US |
dc.date.accessioned | 2016-10-11T05:19:48Z | |
dc.date.available | 2016-10-11T05:19:48Z | |
dc.date.issued | 2016 | |
dc.description.abstract | Superpixels have been widely used as a preprocessing step in various computer vision tasks. Spatial compactness and color homogeneity are the two key factors determining the quality of the superpixel representation. In this paper, these two objectives are considered separately, and anisotropic superpixels are generated to better adapt to local image content. We develop a unimodular Gaussian generative model to guide the color homogeneity within a superpixel by learning local pixel color variations. It turns out that maximizing the log-likelihood of our generative model is equivalent to solving a Centroidal Voronoi Tessellation (CVT) problem. Moreover, we provide a theoretical guarantee that the CVT result is invariant to affine illumination change, which makes our anisotropic superpixel generation algorithm well suited for image/video analysis in varying illumination environments. The effectiveness of our method in image/video superpixel generation is demonstrated through comparisons with other state-of-the-art methods. | en_US |
dc.description.number | 7 | |
dc.description.sectionheaders | Image Processing | |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.volume | 35 | |
dc.identifier.doi | 10.1111/cgf.13017 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.pages | 199-207 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.13017 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf13017 | |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.title | Anisotropic Superpixel Generation Based on Mahalanobis Distance | en_US |
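The abstract above outlines a Mahalanobis-distance-driven, CVT-style clustering of pixels. Below is a minimal sketch of that general idea, not the authors' implementation: a Lloyd-style iteration in a joint position+color feature space where each cluster carries its own covariance, rescaled to unit determinant to mimic the unimodular constraint mentioned in the abstract. The function name, 5-D feature choice, regularization term `reg`, and iteration count are illustrative assumptions.

```python
# Minimal sketch (assumed details, not the paper's exact algorithm):
# Lloyd-style anisotropic clustering of pixels with per-cluster
# Mahalanobis distances in a joint (x, y, r, g, b) feature space.
import numpy as np

def anisotropic_superpixels(image, n_segments=16, n_iters=10, reg=1e-3):
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # 5-D features: spatial coordinates plus RGB color.
    feats = np.column_stack([xs.ravel(), ys.ravel(),
                             image.reshape(-1, 3)]).astype(np.float64)

    # Seed cluster centers on a regular grid.
    step = int(np.sqrt(h * w / n_segments))
    seeds = feats[((ys % step == step // 2) &
                   (xs % step == step // 2)).ravel()]
    k = len(seeds)
    mus = seeds.copy()
    inv_covs = np.tile(np.eye(5), (k, 1, 1))  # start from isotropic metrics

    labels = np.zeros(len(feats), dtype=int)
    for _ in range(n_iters):
        # Assignment step: squared Mahalanobis distance to every center.
        dists = np.empty((len(feats), k))
        for j in range(k):
            d = feats - mus[j]
            dists[:, j] = np.einsum('ni,ij,nj->n', d, inv_covs[j], d)
        labels = dists.argmin(axis=1)

        # Update step: re-estimate mean and unit-determinant covariance.
        for j in range(k):
            pts = feats[labels == j]
            if len(pts) < 6:   # too few pixels to fit a 5x5 covariance
                continue
            mus[j] = pts.mean(axis=0)
            cov = np.cov(pts, rowvar=False) + reg * np.eye(5)
            cov /= np.linalg.det(cov) ** (1.0 / 5.0)  # unimodular rescaling
            inv_covs[j] = np.linalg.inv(cov)
    return labels.reshape(h, w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 3))  # synthetic stand-in for a real image
    seg = anisotropic_superpixels(img)
    print(seg.shape, seg.max() + 1, "superpixels")
```

The sketch runs on a synthetic random image so it is self-contained; in practice one would pass a real image (values scaled to [0, 1]) and tune the relative weighting of spatial versus color features, which the paper handles through its generative model rather than the ad hoc choices made here.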