Representing Animatable Avatar via Factorized Neural Fields
dc.contributor.author | Song, Chunjin | en_US |
dc.contributor.author | Wu, Zhijie | en_US |
dc.contributor.author | Wandt, Bastian | en_US |
dc.contributor.author | Sigal, Leonid | en_US |
dc.contributor.author | Rhodin, Helge | en_US |
dc.contributor.editor | Attene, Marco | en_US |
dc.contributor.editor | Sellán, Silvia | en_US |
dc.date.accessioned | 2025-06-20T07:40:21Z | |
dc.date.available | 2025-06-20T07:40:21Z | |
dc.date.issued | 2025 | |
dc.description.abstract | Reconstructing high-fidelity 3D human models from monocular videos requires maintaining consistent large-scale body shapes together with finely matched subtle wrinkles. This paper explores how per-frame rendering results can be factorized into a pose-independent component and a pose-dependent counterpart to facilitate frame consistency at multiple scales. Pose-adaptive texture features are further improved by restricting the frequency bands of these two components: pose-independent outputs are expected to be low-frequency, while high-frequency information is linked to pose-dependent factors. We implement this with a dual-branch network. The first branch takes canonical-space coordinates as input, while the second additionally takes the features output by the first branch and the pose information of each frame. A final network integrates the information predicted by both branches and uses volume rendering to generate photo-realistic 3D human images. Through experiments, we demonstrate that our method consistently surpasses state-of-the-art methods in preserving high-frequency details and ensuring consistent body contours. Our code is accessible at https://github.com/ChunjinSong/facavatar. | en_US |
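The sketch below illustrates the dual-branch factorization idea described in the abstract: a pose-independent branch band-limited to low frequencies via few positional-encoding bands, a pose-dependent branch given more bands for high-frequency detail, and a final head that integrates both for volume rendering. All module names, layer sizes, and the band-limiting scheme are assumptions for illustration only; see the authors' repository (https://github.com/ChunjinSong/facavatar) for the actual implementation.

```python
# Hypothetical sketch of the factorized neural field from the abstract;
# NOT the authors' code. Shapes and hyperparameters are illustrative.
import torch
import torch.nn as nn


def positional_encoding(x, num_bands):
    """Fourier features; fewer bands restricts the output to lower frequencies."""
    freqs = 2.0 ** torch.arange(num_bands, dtype=torch.float32, device=x.device)
    angles = x[..., None] * freqs                      # (..., dim, num_bands)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)                   # (..., dim * 2 * num_bands)


class PoseIndependentBranch(nn.Module):
    """Maps canonical-space coordinates to low-frequency, pose-independent features."""
    def __init__(self, num_bands=4, hidden=128, feat_dim=64):
        super().__init__()
        self.num_bands = num_bands
        self.mlp = nn.Sequential(
            nn.Linear(3 * 2 * num_bands, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim))

    def forward(self, x_canonical):
        return self.mlp(positional_encoding(x_canonical, self.num_bands))


class PoseDependentBranch(nn.Module):
    """Predicts high-frequency detail from branch-one features plus per-frame pose."""
    def __init__(self, pose_dim=72, num_bands=10, hidden=128, feat_dim=64):
        super().__init__()
        self.num_bands = num_bands
        in_dim = 3 * 2 * num_bands + feat_dim + pose_dim
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim))

    def forward(self, x_canonical, feat_low, pose):
        enc = positional_encoding(x_canonical, self.num_bands)
        return self.mlp(torch.cat([enc, feat_low, pose], dim=-1))


class FactorizedField(nn.Module):
    """Integrates both branches into density and color for volume rendering."""
    def __init__(self, feat_dim=64, pose_dim=72):
        super().__init__()
        self.low = PoseIndependentBranch(feat_dim=feat_dim)
        self.high = PoseDependentBranch(pose_dim=pose_dim, feat_dim=feat_dim)
        self.head = nn.Linear(2 * feat_dim, 4)         # (density, r, g, b)

    def forward(self, x_canonical, pose):
        f_low = self.low(x_canonical)
        f_high = self.high(x_canonical, f_low, pose)
        out = self.head(torch.cat([f_low, f_high], dim=-1))
        sigma, rgb = out[..., :1], torch.sigmoid(out[..., 1:])
        return sigma, rgb


# Usage on a batch of sample points along camera rays (shapes assumed):
points = torch.rand(1024, 3)    # canonical-space coordinates
pose = torch.rand(1024, 72)     # e.g. SMPL-style pose parameters per sample
sigma, rgb = FactorizedField()(points, pose)
```

The returned densities and colors would then be composited along each ray with standard volume rendering; that step, like everything above, follows the generic NeRF-style recipe rather than the paper's exact formulation.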
dc.description.number | 5 | |
dc.description.sectionheaders | Animation and Morphing | |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.volume | 44 | |
dc.identifier.doi | 10.1111/cgf.70192 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.pages | 13 pages | |
dc.identifier.uri | https://doi.org/10.1111/cgf.70192 | |
dc.identifier.uri | https://diglib.eg.org/handle/10.1111/cgf70192 | |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | CCS Concepts: Computing methodologies → Reconstruction; Shape inference | |
dc.subject | Computing methodologies → Reconstruction | |
dc.subject | Shape inference | |
dc.title | Representing Animatable Avatar via Factorized Neural Fields | en_US |