Learning Body Shape and Pose from Dense Correspondences
dc.contributor.author | Yoshiyasu, Yusuke | en_US |
dc.contributor.author | Gamez, Lucas | en_US |
dc.contributor.editor | Wilkie, Alexander and Banterle, Francesco | en_US |
dc.date.accessioned | 2020-05-24T13:42:31Z | |
dc.date.available | 2020-05-24T13:42:31Z | |
dc.date.issued | 2020 | |
dc.description.abstract | In this paper, we address the problem of learning 3D human pose and body shape from 2D image datasets, without using 3D supervision (body shape and pose), which is difficult to obtain in practice. The idea is to use dense correspondences between image points and a body surface, which can be annotated on in-the-wild 2D images, to extract, aggregate and learn 3D information such as body shape and pose. To do so, we propose a training strategy called "deform-and-learn", in which we alternate between deformable surface registration and training of deep convolutional neural networks (ConvNets). Experimental results show that our method is comparable to previous semi-supervised techniques that use 3D supervision. | en_US |
dc.description.sectionheaders | Modelling - Shape | |
dc.description.seriesinformation | Eurographics 2020 - Short Papers | |
dc.identifier.doi | 10.2312/egs.20201012 | |
dc.identifier.isbn | 978-3-03868-101-4 | |
dc.identifier.issn | 1017-4656 | |
dc.identifier.pages | 37-40 | |
dc.identifier.uri | https://doi.org/10.2312/egs.20201012 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/egs20201012 | |
dc.publisher | The Eurographics Association | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.title | Learning Body Shape and Pose from Dense Correspondences | en_US |
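The "deform-and-learn" strategy summarized in the abstract alternates two steps: a deformable surface registration step that fits body shape and pose parameters to the annotated dense correspondences, and a supervised step that trains a network to regress those parameters from images. Below is a minimal, runnable sketch of that alternation; all names, dimensions, the synthetic data, and the toy linear regressor standing in for the ConvNet are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of the "deform-and-learn" alternation: a registration step fits
# body parameters to dense image-to-surface correspondences, then a regressor is
# trained on the fitted parameters. Everything here is a toy stand-in for illustration.
import numpy as np

rng = np.random.default_rng(0)
N_IMAGES, N_CORR, N_PARAMS, N_FEAT = 64, 50, 10, 128

# Synthetic stand-ins for a 2D dataset: per-image dense correspondences and image features.
correspondences = rng.normal(size=(N_IMAGES, N_CORR, 2))   # annotated 2D points per image
image_features = rng.normal(size=(N_IMAGES, N_FEAT))       # placeholder for ConvNet features

def fit_parameters_to_correspondences(corr_2d, init_params):
    """Hypothetical registration step: refine body parameters so the deformed surface
    agrees with the 2D correspondences. Here it is a dummy update for illustration only:
    it moves the current estimate halfway toward a correspondence-derived target."""
    target = corr_2d.mean(axis=0).repeat(N_PARAMS // 2)[:N_PARAMS]
    return 0.5 * init_params + 0.5 * target

# A toy linear regressor standing in for the ConvNet that maps images to body parameters.
W = np.zeros((N_FEAT, N_PARAMS))

def train_regressor(W, feats, targets, lr=1e-3, steps=200):
    """Supervised step: fit the regressor to the parameters produced by registration,
    using plain gradient descent on a squared error."""
    for _ in range(steps):
        pred = feats @ W
        grad = feats.T @ (pred - targets) / len(feats)
        W -= lr * grad
    return W

params = rng.normal(size=(N_IMAGES, N_PARAMS))  # initial per-image body parameters

for it in range(5):  # alternate registration ("deform") and network training ("learn")
    # 1) Deform: refine per-image parameters using dense correspondences,
    #    starting from the current regressor predictions.
    preds = image_features @ W
    params = np.stack([
        fit_parameters_to_correspondences(correspondences[i], preds[i])
        for i in range(N_IMAGES)
    ])
    # 2) Learn: retrain the regressor on the freshly fitted parameters.
    W = train_regressor(W, image_features, params)
    loss = np.mean((image_features @ W - params) ** 2)
    print(f"iteration {it}: regression loss = {loss:.4f}")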