InfoGCN: Representation Learning for Human Skeleton-Based Action Recognition
Human skeleton-based action recognition offers a valuable means to understand the intricacies of human behavior because it can handle the complex relationships between physical constraints and intention. Although several studies have focused on encoding the skeleton, less attention has been paid to embedding this information in the latent representations of human action. InfoGCN proposes a learning framework for action recognition that combines a novel learning objective with a novel encoding method. First, we design an information bottleneck-based learning objective to guide the model toward learning informative yet compact latent representations. To provide discriminative information for classifying actions, we introduce an attention-based graph convolution that captures the context-dependent intrinsic topology of human action. In addition, we present a multi-modal representation of the skeleton based on the relative positions of joints, designed to provide complementary spatial information. InfoGCN surpasses the known state of the art on multiple skeleton-based action recognition benchmarks, with accuracies of 93.0% on the NTU RGB+D 60 cross-subject split, 89.8% on the NTU RGB+D 120 cross-subject split, and 97.0% on NW-UCLA.
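The abstract's main ingredients, an information bottleneck-style objective, a context-dependent graph topology inferred by attention, and a relative-position view of the skeleton, can be sketched compactly. Below is a minimal, hypothetical PyTorch sketch of these ideas, not the paper's implementation; the module name `AttentionGraphConv`, the time-pooled attention, and the `relative_position_modality` and `ib_style_loss` helpers are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGraphConv(nn.Module):
    """Sketch of an attention-based graph convolution: a context-dependent
    adjacency is inferred from the input with scaled dot-product attention
    and combined with a learnable base skeleton topology.
    Illustrative only; not the paper's implementation."""

    def __init__(self, in_ch: int, out_ch: int, num_joints: int):
        super().__init__()
        self.q = nn.Conv2d(in_ch, out_ch, 1)
        self.k = nn.Conv2d(in_ch, out_ch, 1)
        self.v = nn.Conv2d(in_ch, out_ch, 1)
        # learnable, input-independent topology shared across samples
        self.base_adj = nn.Parameter(torch.eye(num_joints))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, joints)
        q = self.q(x).mean(dim=2)                      # (N, C', V)
        k = self.k(x).mean(dim=2)                      # (N, C', V)
        # context-dependent adjacency per sample: (N, V, V)
        attn = F.softmax(q.transpose(1, 2) @ k / q.size(1) ** 0.5, dim=-1)
        v = self.v(x)                                  # (N, C', T, V)
        return torch.einsum('nctv,nvw->nctw', v, attn + self.base_adj)

def relative_position_modality(x: torch.Tensor, ref_joint: int = 0) -> torch.Tensor:
    """Hypothetical helper: re-express joint coordinates relative to a
    reference joint, giving one complementary spatial 'view' of the skeleton."""
    return x - x[..., ref_joint:ref_joint + 1]

def ib_style_loss(logits, labels, z, beta=1e-4):
    """IB-flavored objective (assumption: a simple quadratic penalty stands
    in for the compression term; the paper's exact regularizer may differ)."""
    ce = F.cross_entropy(logits, labels)   # keep Z informative about the label
    reg = z.pow(2).mean()                  # keep Z compact
    return ce + beta * reg
```

For example, `AttentionGraphConv(3, 64, 25)` applied to a `(batch, 3, frames, 25)` NTU-style joint tensor produces a per-sample adjacency, so the effective graph varies with the action being performed rather than being fixed to the anatomical skeleton.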
Results
Task | Dataset | Model | Metric Name | Metric Value | Global Rank
---|---|---|---|---|---
Skeleton Based Action Recognition | NTU RGB+D | InfoGCN | Accuracy (CV) | 97.1 | # 12
Skeleton Based Action Recognition | NTU RGB+D | InfoGCN | Accuracy (CS) | 93.0 | # 11
Skeleton Based Action Recognition | NTU RGB+D | InfoGCN | Ensembled Modalities | 6 | # 17
Skeleton Based Action Recognition | NTU RGB+D 120 | InfoGCN | Accuracy (Cross-Subject) | 89.8 | # 9
Skeleton Based Action Recognition | NTU RGB+D 120 | InfoGCN | Accuracy (Cross-Setup) | 91.2 | # 9
Skeleton Based Action Recognition | NTU RGB+D 120 | InfoGCN | Ensembled Modalities | 6 | # 18
Skeleton Based Action Recognition | N-UCLA | InfoGCN | Accuracy | 97.0 | # 7
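The "Ensembled Modalities = 6" rows report results from fusing six skeleton modality streams (e.g., joint, bone, motion, and relative-position views). A minimal sketch of such score-level late fusion is shown below; the equal-weight averaging and stream count here are assumptions for illustration, not the paper's exact fusion scheme.

```python
import torch

def ensemble_scores(per_stream_logits, weights=None):
    """Score-level fusion: average (optionally weighted) softmax scores from
    independently trained modality streams, then take the argmax."""
    probs = [torch.softmax(l, dim=-1) for l in per_stream_logits]
    if weights is None:
        weights = [1.0] * len(probs)
    fused = sum(w * p for w, p in zip(weights, probs))
    return fused.argmax(dim=-1)

# e.g., six streams, each producing (batch, num_classes) logits
streams = [torch.randn(4, 60) for _ in range(6)]
pred = ensemble_scores(streams)  # (batch,) predicted class indices
```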