RGB-D to CAD Retrieval with ObjectNN Dataset
dc.contributor.author | Hua, Binh-Son | en_US |
dc.contributor.author | Truong, Quang-Trung | en_US |
dc.contributor.author | Tran, Minh-Khoi | en_US |
dc.contributor.author | Pham, Quang-Hieu | en_US |
dc.contributor.author | Kanezaki, Asako | en_US |
dc.contributor.author | Lee, Tang | en_US |
dc.contributor.author | Chiang, HungYueh | en_US |
dc.contributor.author | Hsu, Winston | en_US |
dc.contributor.author | Li, Bo | en_US |
dc.contributor.author | Lu, Yijuan | en_US |
dc.contributor.author | Johan, Henry | en_US |
dc.contributor.author | Tashiro, Shoki | en_US |
dc.contributor.author | Aono, Masaki | en_US |
dc.contributor.author | Tran, Minh-Triet | en_US |
dc.contributor.author | Pham, Viet-Khoi | en_US |
dc.contributor.author | Nguyen, Hai-Dang | en_US |
dc.contributor.author | Nguyen, Vinh-Tiep | en_US |
dc.contributor.author | Tran, Quang-Thang | en_US |
dc.contributor.author | Phan, Thuyen V. | en_US |
dc.contributor.author | Truong, Bao | en_US |
dc.contributor.author | Do, Minh N. | en_US |
dc.contributor.author | Duong, Anh-Duc | en_US |
dc.contributor.author | Yu, Lap-Fai | en_US |
dc.contributor.author | Nguyen, Duc Thanh | en_US |
dc.contributor.author | Yeung, Sai-Kit | en_US |
dc.contributor.editor | Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov | en_US |
dc.date.accessioned | 2017-04-22T17:17:40Z | |
dc.date.available | 2017-04-22T17:17:40Z | |
dc.date.issued | 2017 | |
dc.description.abstract | The goal of this track is to study and evaluate the performance of 3D object retrieval algorithms using RGB-D data. It is inspired by the practical need to pair an object acquired with a consumer-grade depth camera to CAD models available in public datasets on the Internet. To support the study, we propose ObjectNN, a new dataset with well-segmented and annotated RGB-D objects from SceneNN [HPN*16] and CAD models from ShapeNet [CFG*15]. The evaluation results show that the RGB-D to CAD retrieval problem, while challenging due to partial and noisy 3D reconstructions, can be addressed to a good extent using deep learning techniques, particularly convolutional neural networks trained on multi-view images and 3D geometry. The best method in this track scores 82% in accuracy. | en_US |
dc.description.sectionheaders | SHREC Session I | |
dc.description.seriesinformation | Eurographics Workshop on 3D Object Retrieval | |
dc.identifier.doi | 10.2312/3dor.20171048 | |
dc.identifier.isbn | 978-3-03868-030-7 | |
dc.identifier.issn | 1997-0471 | |
dc.identifier.pages | 25-32 | |
dc.identifier.uri | https://doi.org/10.2312/3dor.20171048 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/3dor20171048 | |
dc.publisher | The Eurographics Association | en_US |
dc.subject | I.4.8 [Computer Vision] | |
dc.subject | Scene Analysis | |
dc.subject | Object Recognition | |
dc.title | RGB-D to CAD Retrieval with ObjectNN Dataset | en_US |