RigidFusion: RGB-D Scene Reconstruction with Rigidly-moving Objects
dc.contributor.author | Wong, Yu-Shiang | en_US |
dc.contributor.author | Li, Changjian | en_US |
dc.contributor.author | Nießner, Matthias | en_US |
dc.contributor.author | Mitra, Niloy J. | en_US |
dc.contributor.editor | Mitra, Niloy and Viola, Ivan | en_US |
dc.date.accessioned | 2021-04-09T08:01:56Z | |
dc.date.available | 2021-04-09T08:01:56Z | |
dc.date.issued | 2021 | |
dc.description.abstract | Although surface reconstruction from depth data has made significant advances in recent years, handling changing environments remains a major challenge. This is unsatisfactory, as humans regularly move objects in their environments. Existing solutions focus on a restricted set of objects (e.g., those detected by semantic classifiers) possibly with template meshes, assume a static camera, or mark objects touched by humans as moving. We remove these assumptions by introducing RigidFusion. Our core idea is a novel asynchronous moving-object detection method, combined with a modified volumetric fusion. This is achieved by a model-to-frame TSDF decomposition leveraging free-space carving of tracked depth values of the current frame with respect to the background model during run-time. As output, we produce separate volumetric reconstructions for the background and each moving object in the scene, along with its trajectory over time. Our method does not rely on object priors (e.g., semantic labels or pre-scanned meshes) and is insensitive to the motion residuals between objects and the camera. In comparison to state-of-the-art methods (e.g., Co-Fusion, MaskFusion), we handle significantly more challenging reconstruction scenarios involving a moving camera and improve moving-object detection (by 26% in miss-detection ratio), tracking (by 27% in MOTA), and reconstruction (by 3% in reconstruction F1) on the synthetic dataset. Please refer to the supplementary material and the project website for the video demonstration (geometry.cs.ucl.ac.uk/projects/2021/rigidfusion). | en_US |
dc.description.number | 2 | |
dc.description.sectionheaders | Analyzing and Integrating RGB-D Images | |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.volume | 40 | |
dc.identifier.doi | 10.1111/cgf.142651 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.pages | 511-522 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.142651 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf142651 | |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | Computing methodologies | |
dc.subject | Reconstruction | |
dc.subject | Tracking | |
dc.subject | Video segmentation | |
dc.subject | Image segmentation | |
dc.title | RigidFusion: RGB-D Scene Reconstruction with Rigidly-moving Objects | en_US |
Files
Original bundle (3 files)
- rigidfusion_supplementary.pdf (2.3 MB, Adobe Portable Document Format)