A Multimodal Personality Prediction Framework based on Adaptive Graph Transformer Network and Multi-task Learning

Date
2025
Journal Title
Computer Graphics Forum
Journal ISSN
1467-8659
Publisher
The Eurographics Association and John Wiley & Sons Ltd.
Abstract
Multimodal personality analysis aims to accurately detect personality traits by incorporating related multimodal information. However, existing methods focus on unimodal features while overlooking the bimodal association features that are crucial for this interdisciplinary task. We therefore propose a multimodal personality prediction framework based on an adaptive graph transformer network and multi-task learning. First, we use pre-trained models to learn specific representations from different modalities, employing the encoders of pre-trained multimodal models as the backbones of the modality-specific extraction methods that mine unimodal features. Next, we introduce a novel adaptive graph transformer network to mine personality-related bimodal association features; this network learns higher-order temporal dependencies over relational graphs and emphasizes the more significant features. Furthermore, we use a multimodal channel attention residual fusion module to obtain the fused features, and we propose a multimodal and unimodal joint learning regression head to learn and predict scores for personality traits. We also design a multi-task loss function to enhance the robustness and accuracy of personality prediction. Experimental results on two benchmark datasets demonstrate the effectiveness of our framework, which outperforms state-of-the-art methods. The code is available at https://github.com/RongquanWang/PPF-AGTNMTL.
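The abstract describes a joint multimodal/unimodal regression head trained with a multi-task loss. The following is a minimal, hypothetical PyTorch sketch of that idea only: a head on the fused features plus auxiliary heads on each unimodal feature, combined in a weighted regression loss. The class and function names, dimensions, sigmoid output range, L1 criterion, and aux_weight are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code).

import torch
import torch.nn as nn

class JointRegressionHead(nn.Module):
    """Hypothetical multimodal + unimodal joint regression head.

    Assumes fixed-size feature vectors and five continuous
    personality-trait scores (Big Five) in [0, 1].
    """

    def __init__(self, fused_dim, unimodal_dims, num_traits=5):
        super().__init__()
        # One head for the fused features, one per unimodal feature.
        self.fused_head = nn.Linear(fused_dim, num_traits)
        self.unimodal_heads = nn.ModuleList(
            nn.Linear(d, num_traits) for d in unimodal_dims
        )

    def forward(self, fused_feat, unimodal_feats):
        fused_pred = torch.sigmoid(self.fused_head(fused_feat))
        unimodal_preds = [
            torch.sigmoid(head(f))
            for head, f in zip(self.unimodal_heads, unimodal_feats)
        ]
        return fused_pred, unimodal_preds

def multitask_loss(fused_pred, unimodal_preds, target, aux_weight=0.3):
    """Weighted sum of the fused-prediction loss and auxiliary
    unimodal losses (aux_weight is an assumed hyperparameter)."""
    criterion = nn.L1Loss()
    loss = criterion(fused_pred, target)
    for pred in unimodal_preds:
        loss = loss + aux_weight * criterion(pred, target)
    return loss

# Example usage with made-up batch sizes and feature dimensions:
# head = JointRegressionHead(fused_dim=256, unimodal_dims=[128, 128, 128])
# fused_pred, uni_preds = head(torch.randn(4, 256),
#                              [torch.randn(4, 128) for _ in range(3)])
# loss = multitask_loss(fused_pred, uni_preds, torch.rand(4, 5))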
CCS Concepts: Imaging/Video → Image/Video Processing; Interaction → Multimodal/Cross-modal Interaction; Methods/Applications → Artificial Intelligence/Machine Learning

        
@article{10.1111:cgf.70030,
  journal   = {Computer Graphics Forum},
  title     = {{A Multimodal Personality Prediction Framework based on Adaptive Graph Transformer Network and Multi-task Learning}},
  author    = {Wang, Rongquan and Zhao, Xile and Xu, Xianyu and Hao, Yang},
  year      = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.70030}
}