A Unified Neural Network for Panoptic Segmentation
dc.contributor.author | Yao, Li | en_US |
dc.contributor.author | Chyau, Ang | en_US |
dc.contributor.editor | Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon | en_US |
dc.date.accessioned | 2019-10-14T05:08:57Z | |
dc.date.available | 2019-10-14T05:08:57Z | |
dc.date.issued | 2019 | |
dc.description.abstract | In this paper, we propose a unified neural network for panoptic segmentation, a task that aims at more fine-grained segmentation. Following existing methods that combine semantic and instance segmentation, our method relies on a triple-branch neural network to tackle the unified task. In the first stage, we adopt ResNet-50 with a feature pyramid network (FPN) as a shared backbone to extract features. Each branch then leverages the shared feature maps and serves as the stuff, things, or mask branch. Finally, the outputs are fused following a well-designed strategy. Extensive experimental results on the MS-COCO dataset demonstrate that our approach achieves a Panoptic Quality (PQ) score competitive with the state of the art. | en_US |
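As a rough illustration of the triple-branch layout described in the abstract, the following PyTorch sketch wires a shared backbone to separate stuff, things, and mask heads. It is a minimal sketch only: the module names, channel sizes, toy backbone (standing in for ResNet-50 + FPN), and the trivial fusion of outputs are all illustrative assumptions, not the authors' implementation.

# Minimal sketch of a triple-branch panoptic network (assumed PyTorch).
# Channel sizes, head depths, and class counts are illustrative only.
import torch
import torch.nn as nn

class SharedBackbone(nn.Module):
    """Stand-in for a ResNet-50 + FPN backbone; returns one shared feature map."""
    def __init__(self, out_channels=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_channels, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.body(x)

class PanopticNet(nn.Module):
    def __init__(self, num_stuff=53, num_things=80, feat_channels=256):
        super().__init__()
        self.backbone = SharedBackbone(feat_channels)
        # Stuff branch: dense semantic logits over stuff classes.
        self.stuff_head = nn.Conv2d(feat_channels, num_stuff, 1)
        # Things branch: dense logits over thing classes
        # (a real system would typically use a detection head here).
        self.things_head = nn.Conv2d(feat_channels, num_things, 1)
        # Mask branch: class-agnostic foreground mask probabilities.
        self.mask_head = nn.Conv2d(feat_channels, 1, 1)

    def forward(self, x):
        feats = self.backbone(x)  # shared feature maps used by all branches
        return {
            "stuff": self.stuff_head(feats),
            "things": self.things_head(feats),
            "mask": torch.sigmoid(self.mask_head(feats)),
        }

if __name__ == "__main__":
    net = PanopticNet()
    outputs = net(torch.randn(1, 3, 256, 256))
    for name, tensor in outputs.items():
        print(name, tuple(tensor.shape))

In practice, the three branch outputs would then be merged by the paper's fusion strategy into a single panoptic map; the sketch stops at the per-branch predictions since that strategy is not detailed in the abstract.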
dc.description.number | 7 | |
dc.description.sectionheaders | Images and Learning | |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.volume | 38 | |
dc.identifier.doi | 10.1111/cgf.13852 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.pages | 461-468 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.13852 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf13852 | |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | Computing methodologies | |
dc.subject | Image segmentation | |
dc.subject | Neural networks | |
dc.title | A Unified Neural Network for Panoptic Segmentation | en_US |