Cycle Consistency Loss is a loss function used in generative adversarial networks that perform unpaired image-to-image translation. It was introduced with the CycleGAN architecture. For two domains $X$ and $Y$, we want to learn mappings $G : X \rightarrow Y$ and $F : Y \rightarrow X$. The loss encodes the intuition that these mappings should be inverses of each other and that each should be a bijection. Cycle Consistency Loss encourages $F\left(G\left(x\right)\right) \approx x$ and $G\left(F\left(y\right)\right) \approx y$, reducing the space of possible mapping functions by enforcing both forward and backward cycle consistency:
$$ \mathcal{L}_{cyc}\left(G, F\right) = \mathbb{E}_{x \sim p_{data}\left(x\right)}\left[||F\left(G\left(x\right)\right) - x||_{1}\right] + \mathbb{E}_{y \sim p_{data}\left(y\right)}\left[||G\left(F\left(y\right)\right) - y||_{1}\right] $$
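The loss above can be sketched directly from its definition. The snippet below is a minimal NumPy illustration, not the CycleGAN implementation: `G` and `F` here are hypothetical stand-in mappings (a simple affine map and its exact inverse) used only to show that the loss vanishes when the two mappings are true inverses of each other.

```python
import numpy as np

# Hypothetical stand-ins for the learned generators G: X -> Y and F: Y -> X.
# F is the exact inverse of G, so both cycles reconstruct their inputs.
def G(x):
    return 2.0 * x + 1.0

def F(y):
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x_batch, y_batch):
    """L1 reconstruction error for both cycles, averaged over the batch."""
    forward = np.mean(np.abs(F(G(x_batch)) - x_batch))   # x -> G(x) -> F(G(x)) ~ x
    backward = np.mean(np.abs(G(F(y_batch)) - y_batch))  # y -> F(y) -> G(F(y)) ~ y
    return forward + backward

x = np.random.rand(4, 3)  # toy batch of samples from domain X
y = np.random.rand(4, 3)  # toy batch of samples from domain Y
print(cycle_consistency_loss(x, y))  # near zero, since F inverts G exactly
```

In practice $G$ and $F$ are neural networks rather than closed-form inverses, and this term is added (with a weighting factor) to the adversarial losses of the two discriminators during training.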
Source: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
The tasks that most commonly use this loss, by share of associated papers:

Task | Papers | Share
---|---|---
Translation | 101 | 13.99%
Image-to-Image Translation | 81 | 11.22%
Image Generation | 41 | 5.68%
Domain Adaptation | 36 | 4.99%
Style Transfer | 25 | 3.46%
Semantic Segmentation | 24 | 3.32%
Unsupervised Domain Adaptation | 14 | 1.94%
Super-Resolution | 13 | 1.80%
Object Detection | 12 | 1.66%