Robust Diffusion GAN using Semi-Unbalanced Optimal Transport

28 Nov 2023 · Quan Dao, Binh Ta, Tung Pham, Anh Tran

Diffusion models, a class of generative models, have demonstrated great potential for synthesizing highly detailed images. By integrating with GANs, advanced diffusion models such as DDGAN \citep{xiao2022DDGAN} can approach real-time performance, enabling a wide range of practical applications. While DDGAN effectively addresses the key challenges of generative modeling, namely producing high-quality samples, covering the different modes of the data distribution, and achieving fast sampling, it remains susceptible to performance drops when the training dataset is corrupted with outlier samples. This work introduces a robust training technique based on semi-unbalanced optimal transport to effectively mitigate the impact of outliers. Through comprehensive evaluations, we demonstrate that our robust diffusion GAN (RDGAN) outperforms vanilla DDGAN on the aforementioned generative modeling criteria, i.e., image quality, mode coverage, and inference speed, and exhibits improved robustness on both clean and corrupted datasets.
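
The paper's exact formulation is not reproduced on this page, but the core idea of semi-unbalanced optimal transport can be sketched: the marginal constraint on the data side is relaxed with a KL penalty, so transport mass assigned to outlier samples can be dropped, while the model-side marginal stays exact. The minimal sketch below is an illustrative assumption, not the authors' implementation; the hyperparameter names (eps, tau, n_iters) and the per-sample weighting at the end are hypothetical choices for a mini-batch cost matrix.

```python
import numpy as np

def semi_unbalanced_sinkhorn(C, a, b, eps=0.05, tau=1.0, n_iters=200):
    """Entropic semi-unbalanced OT: the marginal `a` (data side) is relaxed
    with a KL penalty of weight tau, while `b` (model side) is enforced exactly."""
    K = np.exp(-C / eps)                   # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    fi = tau / (tau + eps)                 # damping exponent for the relaxed marginal
    for _ in range(n_iters):
        u = (a / (K @ v)) ** fi            # relaxed update (data side)
        v = b / (K.T @ u)                  # exact update (model side)
    return u[:, None] * K * v[None, :]     # transport plan

# Toy usage: high-cost (outlier) data points end up with less transport mass,
# so their row sums can serve as down-weighting factors during training.
rng = np.random.default_rng(0)
data = rng.normal(size=(64, 2))
data[:4] += 10.0                           # inject a few outliers
model = rng.normal(size=(64, 2))
C = ((data[:, None, :] - model[None, :, :]) ** 2).sum(-1)
C = C / C.max()                            # normalize cost for numerical stability
P = semi_unbalanced_sinkhorn(C, np.full(64, 1 / 64), np.full(64, 1 / 64))
weights = P.sum(axis=1)                    # per-sample weights; outliers get small weight
```

Smaller values of tau relax the data marginal more aggressively, discarding more mass from suspected outliers; tau → ∞ recovers standard balanced optimal transport.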

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Image Generation | CelebA-HQ 256x256 | RDGAN | FID | 5.6 | #5 |
| Image Generation | CelebA-HQ 256x256 | RDGAN | Recall | 0.38 | #2 |
| Image Generation | CIFAR-10 | RDGAN | FID | 3.53 | #54 |
| Image Generation | CIFAR-10 | RDGAN | Recall | 0.56 | #4 |
| Image Generation | STL-10 | RDGAN | FID | 13.07 | #5 |
| Image Generation | STL-10 | RDGAN | Recall | 0.47 | #1 |
