Statistics of Deep Generated Images

9 Aug 2017  ·  Yu Zeng, Huchuan Lu, Ali Borji ·

Here, we explore the low-level statistics of images generated by state-of-the-art deep generative models. First, a variational auto-encoder (VAE~\cite{kingma2013auto}), a Wasserstein generative adversarial network (WGAN~\cite{arjovsky2017wasserstein}), and a deep convolutional generative adversarial network (DCGAN~\cite{radford2015unsupervised}) are trained on the ImageNet dataset and on a large set of cartoon frames from animations. Then, for the images generated by these models, as well as for natural scenes and cartoons, we compute statistics including the mean power spectrum, the number of connected components in a given image area, the distribution of random filter responses, and the contrast distribution. Our analyses of the training images support current findings on the scale invariance, non-Gaussianity, and Weibull contrast distribution of natural scenes. We find that although similar results hold for cartoon images, there is still a significant difference between the statistics of natural scenes and those of images generated by the VAE, DCGAN, and WGAN models. In particular, the generated images do not have a scale-invariant mean power spectrum magnitude, which indicates the existence of extra structure in these images. Inspecting how well the statistics of deep generated images match the known statistical properties of natural images, such as scale invariance, non-Gaussianity, and the Weibull contrast distribution, can a) reveal the degree to which deep learning models capture the essence of natural scenes, b) provide a new dimension along which to evaluate models, and c) suggest possible improvements to image generative models (e.g., via new loss functions).
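The scale-invariance property discussed above can be checked numerically: natural images tend to have a radially averaged power spectrum that falls off roughly as $1/f^2$, i.e., a slope near $-2$ in log-log coordinates. The sketch below is not from the paper; it is a minimal NumPy illustration of the test, applied to a synthetic image constructed to have a $1/f$ amplitude spectrum (so its power spectrum is $\sim 1/f^2$). Function names and the synthetic-image construction are this sketch's own assumptions.

```python
import numpy as np

def radial_power_spectrum(img):
    """Radially averaged power spectrum of a square grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    # Average power over annuli of integer radius (radius ~ spatial frequency).
    return np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())

# Synthetic scale-invariant image: 1/f amplitude spectrum with random phases.
rng = np.random.default_rng(0)
n = 256
freq = np.hypot(np.fft.fftfreq(n)[:, None], np.fft.fftfreq(n)[None, :])
freq[0, 0] = 1.0                      # avoid division by zero at the DC term
amp = 1.0 / freq                      # amplitude ~ 1/f, hence power ~ 1/f^2
phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
img = np.real(np.fft.ifft2(amp * np.exp(1j * phase)))

# Fit a power law in log-log coordinates, ignoring DC and radii past Nyquist.
spec = radial_power_spectrum(img)
radii = np.arange(1, n // 2)
slope, _ = np.polyfit(np.log(radii), np.log(spec[radii]), 1)
print(f"fitted power-law slope: {slope:.2f}")  # near -2 for scale-invariant images
```

A slope close to $-2$ indicates scale invariance; the paper's observation is that images from the trained VAE, DCGAN, and WGAN models deviate from this power law, while natural scenes follow it.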
