CKD-LQPOSE: Towards a Real-World Low-quality Cross-Task Distilled Pose Estimation Architecture

Date
2024
Publisher
The Eurographics Association
Abstract
Although human pose estimation (HPE) methods have achieved promising results, they still struggle in real-world low-quality (LQ) scenarios. Moreover, because currently public HPE datasets generally lack modeling of LQ information, it is difficult to accurately evaluate the performance of HPE methods in LQ scenarios. Hence, we propose the novel CKD-LQPose architecture, the first HPE architecture to fuse cross-task feature information, which uses a cross-task distillation method to merge HPE information with well-quality (WQ) information. CKD-LQPose effectively enables adaptive feature learning from LQ images and improves their quality to enhance HPE performance. Additionally, we introduce the PatchWQ-Gan module to obtain WQ information and the refined transformer decoder (RTD) module to refine the features further. In the inference stage, CKD-LQPose removes the PatchWQ-Gan and RTD modules to reduce the computational burden. Furthermore, to accurately evaluate HPE methods in LQ scenarios, we develop the RLQPose-DS test benchmark. Extensive experiments on RLQPose-DS, real-world images, and LQ versions of well-known datasets such as COCO, MPII, and CrowdPose show that CKD-LQPose outperforms state-of-the-art approaches by a large margin, demonstrating its effectiveness in real-world LQ scenarios.
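To make the training/inference split concrete, below is a minimal PyTorch sketch of the cross-task distillation idea described in the abstract: a pose student trained on LQ images is supervised both by ground-truth heatmaps and by WQ features from an auxiliary teacher branch (a stand-in for PatchWQ-Gan), and only the student runs at inference. All names (PoseStudent, WQTeacher, alpha), layer choices, and the loss weighting are illustrative assumptions, not the authors' actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseStudent(nn.Module):
    """Maps an LQ image to features and joint heatmaps (hypothetical backbone)."""
    def __init__(self, feat_dim=256, num_joints=17):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=4, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(feat_dim, num_joints, 1)

    def forward(self, x):
        feats = self.backbone(x)
        return feats, self.head(feats)

class WQTeacher(nn.Module):
    """Stand-in for the PatchWQ-Gan branch: extracts WQ features.
    Used only during training; dropped at inference, as in the paper."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=4, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1),
        )

    @torch.no_grad()  # teacher features act as fixed distillation targets
    def forward(self, x):
        return self.enc(x)

def distill_loss(student_feats, teacher_feats, pred_hm, gt_hm, alpha=0.5):
    """Pose supervision plus cross-task feature alignment (alpha is illustrative)."""
    pose = F.mse_loss(pred_hm, gt_hm)
    feat = F.mse_loss(student_feats, teacher_feats)
    return pose + alpha * feat

# Training step sketch: lq_img / wq_img are paired LQ/WQ crops,
# gt_hm the ground-truth joint heatmaps.
student, teacher = PoseStudent(), WQTeacher()
lq_img = torch.randn(2, 3, 256, 192)
wq_img = torch.randn(2, 3, 256, 192)
gt_hm = torch.randn(2, 17, 64, 48)
feats, hm = student(lq_img)
loss = distill_loss(feats, teacher(wq_img), hm, gt_hm)
loss.backward()

# Inference: only the student runs, so the WQ branch adds no cost.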

CCS Concepts: Computing methodologies → Image processing; Interest point and salient region detections

        
Citation

@inproceedings{10.2312:pg.20241321,
  booktitle = {Pacific Graphics Conference Papers and Posters},
  editor    = {Chen, Renjie and Ritschel, Tobias and Whiting, Emily},
  title     = {{CKD-LQPOSE: Towards a Real-World Low-quality Cross-Task Distilled Pose Estimation Architecture}},
  author    = {Liu, Tao and Yao, Beiji and Huang, Jun and Wang, Ya},
  year      = {2024},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-250-9},
  DOI       = {10.2312/pg.20241321}
}