Wasserstein Gradient Penalty Loss, or WGAN-GP Loss, is a loss function for generative adversarial networks that augments the Wasserstein loss with a gradient norm penalty on random samples $\hat{\mathbf{x}} \sim \mathbb{P}_{\hat{\mathbf{x}}}$ to enforce the Lipschitz constraint:
$$ L = \mathbb{E}_{\tilde{\mathbf{x}} \sim \mathbb{P}_{g}}\left[D\left(\tilde{\mathbf{x}}\right)\right] - \mathbb{E}_{\mathbf{x} \sim \mathbb{P}_{r}}\left[D\left(\mathbf{x}\right)\right] + \lambda\mathbb{E}_{\hat{\mathbf{x}} \sim \mathbb{P}_{\hat{\mathbf{x}}}}\left[\left(||\nabla_{\hat{\mathbf{x}}}D\left(\hat{\mathbf{x}}\right)||_{2}-1\right)^{2}\right]$$
Here $\mathbb{P}_{r}$ is the data distribution, $\mathbb{P}_{g}$ is the generator distribution, and $\mathbb{P}_{\hat{\mathbf{x}}}$ samples $\hat{\mathbf{x}}$ uniformly along straight lines between pairs of points drawn from $\mathbb{P}_{r}$ and $\mathbb{P}_{g}$. It was introduced as part of the overall WGAN-GP model.
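As a sketch of how the loss is computed, the snippet below uses a toy linear critic $D(\mathbf{x}) = \mathbf{w} \cdot \mathbf{x}$, whose gradient with respect to its input is simply $\mathbf{w}$, so the gradient norm can be written down analytically; real implementations would obtain this gradient via automatic differentiation. The function and variable names here are illustrative, not from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def critic(x, w):
    """Toy linear critic D(x) = w . x; its gradient w.r.t. x is just w."""
    return x @ w

def wgan_gp_loss(real, fake, w, lam=10.0):
    """WGAN-GP critic loss: Wasserstein term plus gradient norm penalty."""
    # Wasserstein term: E[D(x_tilde)] - E[D(x)]
    wasserstein = critic(fake, w).mean() - critic(real, w).mean()
    # Sample x_hat uniformly on lines between real and generated points
    eps = rng.uniform(size=(real.shape[0], 1))
    x_hat = eps * real + (1.0 - eps) * fake
    # For a linear critic the gradient at every x_hat is w; with autograd
    # this would be the gradient of critic(x_hat, w) w.r.t. x_hat
    grads = np.broadcast_to(w, x_hat.shape)
    grad_norms = np.linalg.norm(grads, axis=1)
    # Penalty drives the gradient norm toward 1 (the Lipschitz constraint)
    penalty = ((grad_norms - 1.0) ** 2).mean()
    return wasserstein + lam * penalty

w = np.array([3.0, 4.0])  # ||w||_2 = 5, so the penalty is (5 - 1)^2 = 16
real = rng.normal(size=(8, 2))
fake = rng.normal(size=(8, 2))
loss = wgan_gp_loss(real, fake, w)
```

With this critic the penalty term contributes exactly $\lambda (5 - 1)^2 = 160$ for $\lambda = 10$, illustrating how a critic whose gradient norm deviates from 1 is pushed back toward Lipschitz continuity.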
Source: Improved Training of Wasserstein GANs
| Task | Papers | Share |
|---|---|---|
| Image Generation | 12 | 16.22% |
| Speech Synthesis | 4 | 5.41% |
| Translation | 3 | 4.05% |
| Voice Conversion | 3 | 4.05% |
| Synthetic Data Generation | 3 | 4.05% |
| Disentanglement | 3 | 4.05% |
| Audio Generation | 2 | 2.70% |
| Image-to-Image Translation | 2 | 2.70% |
| Singing Voice Synthesis | 2 | 2.70% |