Implicit Shape Avatar Generalization across Pose and Identity

Abstract
The creation of realistic animated avatars has become a hot topic in both academia and the creative industry. Recent advances in deep learning and implicit representations have opened new research avenues, particularly for enhancing avatar detail with lightweight models. This paper introduces an improvement over the state-of-the-art implicit Fast-SNARF method that permits generalization to novel motions and shape identities. Fast-SNARF trains two networks: an occupancy network that predicts the shape of a character in canonical space, and a Linear Blend Skinning network that deforms it into arbitrary poses. However, it requires a separate model for each subject. We extend this work by conditioning both networks on an identity parameter, enabling a single model to generalize across multiple identities without increasing the model's size compared to Fast-SNARF.
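The identity conditioning described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the network sizes, the identity-code dimension, and all names (`occupancy_net`, `lbs_net`, `z_id`) are hypothetical; the point is simply that both the occupancy and skinning-weight networks take a per-subject identity code concatenated with each canonical-space query point.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(in_dim, hidden, out_dim):
    # Tiny two-layer MLP with random weights (illustration only).
    W1 = rng.standard_normal((in_dim, hidden)) * 0.1
    W2 = rng.standard_normal((hidden, out_dim)) * 0.1
    return lambda x: np.maximum(x @ W1, 0.0) @ W2

id_dim, n_bones, n_pts = 8, 24, 1024
occupancy_net = mlp(3 + id_dim, 32, 1)        # canonical-space occupancy
lbs_net = mlp(3 + id_dim, 32, n_bones)        # per-bone skinning weights

pts = rng.random((n_pts, 3))                  # canonical query points
z_id = rng.standard_normal((1, id_dim))       # learned identity code (one subject)
inp = np.concatenate([pts, np.repeat(z_id, n_pts, axis=0)], axis=1)

occ = 1.0 / (1.0 + np.exp(-occupancy_net(inp)))             # (1024, 1) in (0, 1)
logits = lbs_net(inp)
weights = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # rows sum to 1
```

Swapping `z_id` for another subject's code reuses the same two networks, which is what lets a single model cover multiple identities at no extra parameter cost.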
CCS Concepts: Computing methodologies → Motion processing; Mesh models

        
@inproceedings{10.2312:egs.20251049,
  booktitle = {Eurographics 2025 - Short Papers},
  editor    = {Ceylan, Duygu and Li, Tzu-Mao},
  title     = {{Implicit Shape Avatar Generalization across Pose and Identity}},
  author    = {Loranchet, Guillaume and Hellier, Pierre and Schnitzler, Francois and Boukhayma, Adnane and Regateiro, Joao and Multon, Franck},
  year      = {2025},
  publisher = {The Eurographics Association},
  ISSN      = {1017-4656},
  ISBN      = {978-3-03868-268-4},
  DOI       = {10.2312/egs.20251049}
}