Implicit Temporal Modeling with Learnable Alignment for Video Recognition

Contrastive language-image pretraining (CLIP) has demonstrated remarkable success in various image tasks. However, how to extend CLIP with effective temporal modeling is still an open and crucial problem. Existing factorized or joint spatial-temporal modeling trades off efficiency against performance. While modeling temporal information within a straight-through tube is widely adopted in the literature, we find that simple frame alignment already captures the essential temporal information without temporal attention. To this end, in this paper we propose a novel Implicit Learnable Alignment (ILA) method, which minimizes the temporal modeling effort while achieving remarkably high performance. Specifically, for each frame pair, an interactive point is predicted in each frame, serving as a mutual-information-rich region. By enhancing the features around the interactive point, the two frames are implicitly aligned. The aligned features are then pooled into a single token, which is leveraged in the subsequent spatial self-attention. Our method thereby eliminates the costly yet often insufficient temporal self-attention in video. Extensive experiments on benchmarks demonstrate the superiority and generality of our module. In particular, ILA achieves a top-1 accuracy of 88.7% on Kinetics-400 with far fewer FLOPs than Swin-L and ViViT-H. Code is released at https://github.com/Francis-Rings/ILA.
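To make the alignment step concrete, below is a minimal PyTorch sketch of how such frame-pair alignment could look. It is a hypothetical illustration, not the released implementation: the module name ImplicitLearnableAlignment, the pooled-feature point head, and the Gaussian-style enhancement mask are all assumptions made for exposition; the actual design is in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ImplicitLearnableAlignment(nn.Module):
    """Hypothetical sketch of an ILA-style frame-pair alignment.

    For a pair of frames, a lightweight head predicts an 'interactive
    point'; features near that point are enhanced with a soft spatial
    mask, and the aligned features are pooled into a single token for
    the subsequent spatial self-attention.
    """

    def __init__(self, dim: int, grid_size: int = 14, temperature: float = 0.1):
        super().__init__()
        self.grid_size = grid_size
        self.temperature = temperature
        # Assumed point head: predicts normalized (x, y) coordinates of
        # the interactive point from pooled features of the frame pair.
        self.point_head = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.GELU(),
            nn.Linear(dim, 2),
            nn.Sigmoid(),  # coordinates in [0, 1]
        )

    def forward(self, x_t: torch.Tensor, x_prev: torch.Tensor) -> torch.Tensor:
        """x_t, x_prev: (B, H*W, C) patch tokens of two consecutive frames."""
        B, N, C = x_t.shape
        H = W = self.grid_size
        assert N == H * W, "token count must match the spatial grid"

        # Predict an interactive point from mean-pooled pair features.
        pooled = torch.cat([x_t.mean(dim=1), x_prev.mean(dim=1)], dim=-1)  # (B, 2C)
        point = self.point_head(pooled)  # (B, 2), normalized (x, y)

        # Soft Gaussian-like mask centered at the interactive point.
        ys = torch.linspace(0, 1, H, device=x_t.device)
        xs = torch.linspace(0, 1, W, device=x_t.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")          # (H, W) each
        grid = torch.stack([gx, gy], dim=-1).reshape(1, N, 2)   # (1, H*W, 2)
        dist2 = ((grid - point[:, None, :]) ** 2).sum(-1)       # (B, H*W)
        mask = torch.exp(-dist2 / self.temperature)             # (B, H*W)

        # Enhance features around the point (the implicit alignment).
        aligned = x_t * (1.0 + mask.unsqueeze(-1))              # (B, H*W, C)

        # Pool the aligned map into a single alignment token.
        weights = F.softmax(mask, dim=1).unsqueeze(-1)          # (B, H*W, 1)
        return (aligned * weights).sum(dim=1, keepdim=True)     # (B, 1, C)
```

Under these assumptions, the returned token would be appended to the frame's patch tokens before the spatial self-attention block, standing in for temporal self-attention.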

Published at ICCV 2023.
| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Action Classification | Kinetics-400 | ILA (ViT-L/14) | Acc@1 | 88.7 | #16 |
| Action Classification | Kinetics-400 | ILA (ViT-L/14) | Acc@5 | 97.8 | #12 |
| Action Classification | Kinetics-400 | ILA (ViT-B/32) | Acc@1 | 82.4 | #71 |
| Action Classification | Kinetics-400 | ILA (ViT-B/32) | Acc@5 | 95.8 | #43 |
| Action Classification | Kinetics-400 | ILA (ViT-B/16) | Acc@1 | 85.7 | #46 |
| Action Classification | Kinetics-400 | ILA (ViT-B/16) | Acc@5 | 97.2 | #29 |
| Action Recognition | Something-Something V2 | ILA (ViT-B/16) | Top-1 Accuracy | 66.8 | #74 |
| Action Recognition | Something-Something V2 | ILA (ViT-B/16) | Top-5 Accuracy | 90.3 | #59 |
| Action Recognition | Something-Something V2 | ILA (ViT-L/14) | Top-1 Accuracy | 70.2 | #36 |
| Action Recognition | Something-Something V2 | ILA (ViT-L/14) | Top-5 Accuracy | 91.8 | #33 |
