Unsupervised Learning of Disentangled 3D Representation from a Single Image
dc.contributor.author | Lv, Junliang | en_US |
dc.contributor.author | Jiang, Haiyong | en_US |
dc.contributor.author | Xiao, Jun | en_US |
dc.contributor.editor | Bittner, Jiří and Waldner, Manuela | en_US |
dc.date.accessioned | 2021-04-09T19:18:48Z | |
dc.date.available | 2021-04-09T19:18:48Z | |
dc.date.issued | 2021 | |
dc.description.abstract | Learning a 3D representation from a single image is challenging because of the ambiguity, occlusion, and perspective projection of an object in an image. Previous works either rely on image annotations or 3D supervision to learn meaningful factors of an object, or employ a StyleGAN-like framework for image synthesis. While the former require tedious annotation and even dense geometry ground truth, the latter usually cannot guarantee shape consistency between images of different views. In this paper, we combine the advantages of both frameworks and propose an image disentanglement method based on a 3D representation. Results show that our method facilitates unsupervised 3D representation learning while preserving consistency between images. | en_US |
dc.description.sectionheaders | Posters | |
dc.description.seriesinformation | Eurographics 2021 - Posters | |
dc.identifier.doi | 10.2312/egp.20211030 | |
dc.identifier.isbn | 978-3-03868-134-2 | |
dc.identifier.issn | 1017-4656 | |
dc.identifier.pages | 11-12 | |
dc.identifier.uri | https://doi.org/10.2312/egp.20211030 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/egp20211030 | |
dc.publisher | The Eurographics Association | en_US |
dc.subject | Computing methodologies | |
dc.subject | Image representations | |
dc.subject | Reconstruction | |
dc.subject | Mesh models | |
dc.title | Unsupervised Learning of Disentangled 3D Representation from a Single Image | en_US |