Diverse Video Captioning by Adaptive Spatio-temporal Attention

19 Aug 2022 · Zohreh Ghaderi, Leonard Salewski, Hendrik P. A. Lensch

To generate proper captions for videos, the inference process needs to identify relevant concepts and pay attention both to the spatial relationships between them and to the temporal development in the clip. Our end-to-end encoder-decoder video captioning framework incorporates two transformer-based architectures: an adapted transformer for a single joint spatio-temporal video analysis, and a self-attention-based decoder for advanced text generation. Furthermore, we introduce an adaptive frame selection scheme that reduces the number of required input frames while maintaining the relevant content when training both transformers. Additionally, we estimate semantic concepts relevant for video captioning by aggregating all ground-truth captions of each sample. Our approach achieves state-of-the-art results on the MSVD dataset as well as on the large-scale MSR-VTT and VATEX benchmarks across multiple Natural Language Generation (NLG) metrics. Additional evaluations of diversity scores highlight the expressiveness and structural diversity of our generated captions.
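
A minimal PyTorch sketch of the pipeline described above: an adaptive frame selection step feeding a joint spatio-temporal transformer encoder, followed by a self-attention caption decoder. All class names, layer sizes, and the top-k scoring heuristic (AdaptiveFrameSelector, VideoCaptioner) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class AdaptiveFrameSelector(nn.Module):
    """Scores frames and keeps the k highest-scoring ones (illustrative heuristic)."""

    def __init__(self, feat_dim: int, num_keep: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)
        self.num_keep = num_keep

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, feat_dim)
        scores = self.score(frames).squeeze(-1)            # (batch, num_frames)
        keep = scores.topk(self.num_keep, dim=1).indices   # most informative frames
        keep, _ = keep.sort(dim=1)                         # restore temporal order
        batch_idx = torch.arange(frames.size(0)).unsqueeze(1)
        return frames[batch_idx, keep]                     # (batch, num_keep, feat_dim)


class VideoCaptioner(nn.Module):
    """Joint spatio-temporal encoder + self-attention caption decoder (sketch only)."""

    def __init__(self, feat_dim: int = 768, vocab_size: int = 10000, num_keep: int = 16):
        super().__init__()
        self.selector = AdaptiveFrameSelector(feat_dim, num_keep)
        enc_layer = nn.TransformerEncoderLayer(feat_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        dec_layer = nn.TransformerDecoderLayer(feat_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=4)
        self.embed = nn.Embedding(vocab_size, feat_dim)
        self.out = nn.Linear(feat_dim, vocab_size)

    def forward(self, frame_feats: torch.Tensor, caption_tokens: torch.Tensor):
        # frame_feats: (batch, num_frames, feat_dim); caption_tokens: (batch, seq_len)
        memory = self.encoder(self.selector(frame_feats))  # spatio-temporal encoding
        tgt = self.embed(caption_tokens)
        seq_len = caption_tokens.size(1)
        causal = torch.triu(                               # block attention to future tokens
            torch.full((seq_len, seq_len), float("-inf"), device=tgt.device), diagonal=1)
        hidden = self.decoder(tgt, memory, tgt_mask=causal)
        return self.out(hidden)                            # per-position vocabulary logits


# Example: 32 candidate frames per clip are reduced to 16 before encoding.
model = VideoCaptioner()
logits = model(torch.randn(2, 32, 768), torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 10000])
```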

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Video Captioning | MSR-VTT | VASTA (Vatex-backbone) | CIDEr | 56.08 | #19 |
| Video Captioning | MSR-VTT | VASTA (Vatex-backbone) | METEOR | 30.24 | #14 |
| Video Captioning | MSR-VTT | VASTA (Vatex-backbone) | ROUGE-L | 62.9 | #16 |
| Video Captioning | MSR-VTT | VASTA (Vatex-backbone) | BLEU-4 | 44.21 | #18 |
| Video Captioning | MSR-VTT | VASTA (Kinetics-backbone) | CIDEr | 55 | #20 |
| Video Captioning | MSR-VTT | VASTA (Kinetics-backbone) | METEOR | 30.2 | #15 |
| Video Captioning | MSR-VTT | VASTA (Kinetics-backbone) | ROUGE-L | 62.5 | #17 |
| Video Captioning | MSR-VTT | VASTA (Kinetics-backbone) | BLEU-4 | 43.4 | #20 |
| Video Captioning | MSVD | VASTA (Vatex-backbone) | CIDEr | 119.7 | #11 |
| Video Captioning | MSVD | VASTA (Vatex-backbone) | BLEU-4 | 59.2 | #9 |
| Video Captioning | MSVD | VASTA (Vatex-backbone) | METEOR | 40.65 | #7 |
| Video Captioning | MSVD | VASTA (Vatex-backbone) | ROUGE-L | 76.7 | #7 |
| Video Captioning | MSVD | VASTA (Kinetics-backbone) | CIDEr | 106.4 | #14 |
| Video Captioning | MSVD | VASTA (Kinetics-backbone) | BLEU-4 | 56.1 | #12 |
| Video Captioning | MSVD | VASTA (Kinetics-backbone) | METEOR | 39.1 | #9 |
| Video Captioning | MSVD | VASTA (Kinetics-backbone) | ROUGE-L | 74.5 | #11 |
| Video Captioning | VATEX | VASTA (Kinetics-backbone) | BLEU-4 | 36.25 | #7 |
| Video Captioning | VATEX | VASTA (Kinetics-backbone) | CIDEr | 65.07 | #6 |
| Video Captioning | VATEX | VASTA (Kinetics-backbone) | METEOR | 25.32 | #3 |
| Video Captioning | VATEX | VASTA (Kinetics-backbone) | ROUGE-L | 51.88 | #6 |
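
The numbers above are standard NLG captioning metrics (BLEU-4, METEOR, ROUGE-L, CIDEr). A small sketch of how such scores can be computed with the commonly used pycocoevalcap package is shown below; the video id and captions are made-up placeholders, and real evaluation aggregates all reference captions of each test clip (the METEOR scorer also requires a Java runtime).

```python
# Sketch of computing the reported NLG metrics with the pycocoevalcap package.
# The video id and captions below are placeholders, not data from the paper.
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.meteor.meteor import Meteor
from pycocoevalcap.rouge.rouge import Rouge
from pycocoevalcap.cider.cider import Cider

references = {"video0": ["a man is playing a guitar", "someone plays the guitar"]}
candidates = {"video0": ["a man plays a guitar"]}  # one generated caption per clip

scorers = [("BLEU-4", Bleu(4)), ("METEOR", Meteor()),
           ("ROUGE-L", Rouge()), ("CIDEr", Cider())]
for name, scorer in scorers:
    score, _ = scorer.compute_score(references, candidates)
    if name == "BLEU-4":
        score = score[3]  # Bleu(4) returns BLEU-1..4; keep the 4-gram score
    print(f"{name}: {score:.4f}")
```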
