BlendPCR: Seamless and Efficient Rendering of Dynamic Point Clouds captured by Multiple RGB-D Cameras
dc.contributor.author | Mühlenbrock, Andre | en_US |
dc.contributor.author | Weller, Rene | en_US |
dc.contributor.author | Zachmann, Gabriel | en_US |
dc.contributor.editor | Hasegawa, Shoichi | en_US |
dc.contributor.editor | Sakata, Nobuchika | en_US |
dc.contributor.editor | Sundstedt, Veronica | en_US |
dc.date.accessioned | 2024-11-29T06:43:07Z | |
dc.date.available | 2024-11-29T06:43:07Z | |
dc.date.issued | 2024 | |
dc.description.abstract | Traditional techniques for rendering continuous surfaces from dynamic, noisy point clouds captured by multi-camera setups often suffer from disruptive artifacts in overlapping areas, similar to z-fighting. We introduce BlendPCR, an advanced rendering technique that effectively eliminates these artifacts through a dual approach of point cloud processing and screen-space blending. Additionally, we present a UV coordinate encoding scheme that enables high-resolution texture mapping via standard camera SDKs. We demonstrate that our approach offers superior visual rendering quality over traditional splat- and mesh-based methods and exhibits none of the overlap artifacts that still occur in leading-edge NeRF- and Gaussian-Splat-based approaches such as Pointersect and P2ENet. In practical tests with seven Microsoft Azure Kinects, processing, including uploading the point clouds to the GPU, requires only 13.8 ms (when using one color per point) or 29.2 ms (when using high-resolution color textures), and rendering at a resolution of 3580 x 2066 takes just 3.2 ms, proving its suitability for real-time VR applications. | en_US |
dc.description.sectionheaders | Rendering and Sensing | |
dc.description.seriesinformation | ICAT-EGVE 2024 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments | |
dc.identifier.doi | 10.2312/egve.20241366 | |
dc.identifier.isbn | 978-3-03868-245-5 | |
dc.identifier.issn | 1727-530X | |
dc.identifier.pages | 10 pages | |
dc.identifier.uri | https://doi.org/10.2312/egve.20241366 | |
dc.identifier.uri | https://diglib.eg.org/handle/10.2312/egve20241366 | |
dc.publisher | The Eurographics Association | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | CCS Concepts: Computing methodologies → Rendering; Virtual reality; Point-based models; Mesh geometry models | |
dc.subject | Computing methodologies → Rendering | |
dc.subject | Virtual reality | |
dc.subject | Point-based models
dc.subject | Mesh geometry models | |
dc.title | BlendPCR: Seamless and Efficient Rendering of Dynamic Point Clouds captured by Multiple RGB-D Cameras | en_US |