Temporally Consistent Motion Segmentation From RGB‐D Video
dc.contributor.author | Bertholet, P. | en_US |
dc.contributor.author | Ichim, A.E. | en_US |
dc.contributor.author | Zwicker, M. | en_US |
dc.contributor.editor | Chen, Min and Benes, Bedrich | en_US |
dc.date.accessioned | 2018-08-29T06:56:01Z | |
dc.date.available | 2018-08-29T06:56:01Z | |
dc.date.issued | 2018 | |
dc.description.abstract | Temporally consistent motion segmentation from RGB‐D videos is challenging because of the limitations of current RGB‐D sensors. We formulate segmentation as a motion assignment problem, where a motion is a sequence of rigid transformations through all frames of the input. We capture the quality of each potential assignment by defining an appropriate energy function that accounts for occlusions and a sensor‐specific noise model. To make energy minimization tractable, we work with a discrete set instead of the continuous, high-dimensional space of motions, where the discrete motion set provides an upper bound for the original energy. We repeatedly minimize our energy, and in each step extend and refine the motion set to further lower the bound. A quantitative comparison to the current state of the art demonstrates the benefits of our approach in difficult scenarios. | en_US |
dc.description.number | 6 | |
dc.description.sectionheaders | Articles | |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.volume | 37 | |
dc.identifier.doi | 10.1111/cgf.13316 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.pages | 118-134 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.13316 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf13316 | |
dc.publisher | © 2018 The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | image and video processing | |
dc.subject | video segmentation | |
dc.subject | object scanning/acquisition | |
dc.subject | modelling | |
dc.subject | Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image Generation—Line and curve generation | |
dc.title | Temporally Consistent Motion Segmentation From RGB‐D Video | en_US |