Learning Perceptual Compression of Facial Video

29 Sep 2021  ·  Mustafa Shukor, Xu Yao, Bharath Bhushan Damodaran, Pierre Hellier

In this paper, we propose a new paradigm for facial video compression. We leverage the generative capacity of GANs such as StyleGAN to represent and compress each video frame (intra compression), as well as the successive differences between frames (inter compression). Each frame is inverted into the latent space of StyleGAN, where the optimal compression is learned. To do so, a diffeomorphic latent representation is learned with a normalizing-flow model, on top of which an entropy model can be optimized for image coding. In addition, we propose a new perceptual loss that is more efficient than existing alternatives such as LPIPS and VGG16 feature losses. Finally, an entropy model for inter coding with residuals is also learned in the previously constructed latent space. Our method (SGANC) is simple, faster to train, and achieves competitive results for image and video coding compared to state-of-the-art codecs such as VTM and AV1, and to recent deep learning techniques.
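The key idea — mapping GAN latents through an invertible (diffeomorphic) transform so that a simple prior yields a usable rate estimate — can be sketched with a single RealNVP-style affine coupling layer. This is a minimal illustration, not the paper's architecture: the linear "networks" `s_w`/`t_w` and the standard-Gaussian entropy model are placeholder assumptions standing in for the learned flow and learned prior.

```python
import numpy as np

def coupling_forward(w, s_w, t_w):
    """Map a latent vector w to z via one affine coupling layer (invertible)."""
    d = w.shape[-1] // 2
    w1, w2 = w[:d], w[d:]
    s = np.tanh(s_w @ w1)            # log-scale conditioned on first half (toy linear net)
    t = t_w @ w1                     # translation conditioned on first half
    z2 = w2 * np.exp(s) + t
    z = np.concatenate([w1, z2])
    log_det = s.sum()                # log |det Jacobian| of the coupling transform
    return z, log_det

def coupling_inverse(z, s_w, t_w):
    """Exact inverse of coupling_forward — the map is a diffeomorphism."""
    d = z.shape[-1] // 2
    z1, z2 = z[:d], z[d:]
    s = np.tanh(s_w @ z1)
    t = t_w @ z1
    w2 = (z2 - t) * np.exp(-s)
    return np.concatenate([z1, w2])

def rate_in_bits(z, log_det):
    """Rate estimate -log2 p(w) under a standard-Gaussian prior on z.

    Change of variables: p(w) = N(z; 0, I) * |det J|, so
    -log2 p(w) = -log2 N(z; 0, I) - log2 |det J|.
    """
    nll_bits = 0.5 * (z**2 + np.log(2 * np.pi)).sum() / np.log(2)
    return nll_bits - log_det / np.log(2)
```

In the same spirit as the paper, the flow is trained so that latents of inverted frames become cheap to code under the prior, while invertibility guarantees the decoder can recover the exact latent for StyleGAN synthesis.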

