Exploring Spatial-Temporal Multi-Frequency Analysis for High-Fidelity and Temporal-Consistency Video Prediction

Video prediction is a pixel-wise dense prediction task that infers future frames from past frames. Missing appearance details and motion blur remain two major problems for current predictive models, leading to image distortion and temporal inconsistency. In this paper, we point out the necessity of exploring multi-frequency analysis to address these two problems. Inspired by the frequency-band decomposition characteristic of the human visual system (HVS), we propose a video prediction network based on multi-level wavelet analysis that handles spatial and temporal information in a unified manner. Specifically, a multi-level spatial discrete wavelet transform decomposes each video frame into anisotropic sub-bands of multiple frequencies, helping to enrich structural information and preserve fine details. In parallel, a multi-level temporal discrete wavelet transform operating along the time axis decomposes the frame sequence into sub-band groups of different frequencies, accurately capturing multi-frequency motions under a fixed frame rate. Extensive experiments on diverse datasets demonstrate that our model yields significant improvements in fidelity and temporal consistency over state-of-the-art works.
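The two decompositions described above can be illustrated with a minimal sketch using the PyWavelets library. This is not the paper's implementation; the toy clip, wavelet choice (`haar`), and decomposition level are illustrative assumptions. It shows how a spatial 2D DWT splits a single frame into multi-frequency sub-bands, and how a 1D DWT along the time axis splits a frame sequence into slow- and fast-motion components.

```python
import numpy as np
import pywt

# Hypothetical toy clip: 8 grayscale frames of 64x64 (values are random placeholders).
clip = np.random.rand(8, 64, 64).astype(np.float32)

# Spatial multi-level DWT: decompose one frame into anisotropic sub-bands.
# wavedec2 with level=2 returns [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)]:
# a coarse approximation plus horizontal/vertical/diagonal detail bands per level.
approx, detail_lvl2, detail_lvl1 = pywt.wavedec2(clip[0], wavelet='haar', level=2)
print(approx.shape)         # (16, 16): coarse low-frequency structure
print(detail_lvl1[0].shape) # (32, 32): finest horizontal detail band

# Temporal multi-level DWT: decompose along the time axis (axis=0) to separate
# low-frequency (slow) from high-frequency (fast) motion components.
low_motion, mid_motion, fast_motion = pywt.wavedec(
    clip, wavelet='haar', level=2, axis=0
)
print(low_motion.shape)     # (2, 64, 64): slowly varying content
print(fast_motion.shape)    # (4, 64, 64): rapid frame-to-frame changes
```

Each sub-band can then be processed by a dedicated prediction branch, which is the general idea behind treating spatial detail and multi-frequency motion separately.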

CVPR 2020

Results from the Paper


 Ranked #1 on Video Prediction on KTH (PSNR metric)

| Task             | Dataset            | Model | Metric Name | Metric Value | Global Rank |
|------------------|--------------------|-------|-------------|--------------|-------------|
| Video Generation | BAIR Robot Pushing | WAM   | FVD score   | 159.6        | # 20        |
|                  |                    |       | Cond        | 2            | # 13        |
|                  |                    |       | SSIM        | 0.844        | # 1         |
|                  |                    |       | PSNR        | 21.02        | # 1         |
|                  |                    |       | LPIPS       | 0.0936       | # 1         |
|                  |                    |       | Pred        | 28           | # 20        |
|                  |                    |       | Train       | 14           | # 12        |
| Video Prediction | KTH                | WAM   | PSNR        | 29.85        | # 1         |
|                  |                    |       | SSIM        | 0.893        | # 2         |
|                  |                    |       | Cond        | 10           | # 1         |
|                  |                    |       | Pred        | 20           | # 1         |

Methods


No methods listed for this paper.