AugGAN: Cross Domain Adaptation with GAN-based Data Augmentation

Deep-learning-based image-to-image translation methods aim to learn the joint distribution of two domains and to find transformations between them. Although recent GAN (Generative Adversarial Network) based methods have shown compelling visual results, they are prone to failing to preserve image objects and to maintain translation consistency under large and complex domain shifts, which reduces their practicality for tasks such as generating large-scale training data for different domains. To address this problem, we propose a weakly supervised, structure-aware image-to-image translation network composed of encoders, generators, discriminators, and parsing nets for the two domains, respectively, in a unified framework. The proposed network generates more visually plausible images in the target domain than competing methods across different image-translation tasks. In addition, we quantitatively evaluate the different methods by training Faster-RCNN and YOLO on datasets generated from the image-translation results, and we demonstrate significant improvements in detection accuracy with the proposed image-object-preserving network.
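
The abstract outlines the network layout: per-domain encoders whose features feed both a cross-domain generator and a parsing (segmentation) net, plus a discriminator per domain. As a rough illustration only, the following is a minimal PyTorch sketch of that weight-sharing layout; all module names, layer sizes, and the `num_classes` value are hypothetical placeholders, since the abstract does not specify the actual architecture or losses.

```python
# Hypothetical sketch of an AugGAN-style structure-aware translation network.
# Illustrates the layout only: each domain's encoder feeds BOTH a cross-domain
# generator and a parsing (segmentation) decoder; each domain has a discriminator.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_ch=3, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Upsampling head, reused for both image generation and parsing."""
    def __init__(self, out_ch, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat, out_ch, 4, stride=2, padding=1),
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self, in_ch=3, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat, 1, 4, stride=2, padding=1),  # PatchGAN-style real/fake map
        )
    def forward(self, x):
        return self.net(x)

class AugGANSketch(nn.Module):
    def __init__(self, num_classes=19):  # 19 classes is a placeholder choice
        super().__init__()
        self.enc_a, self.enc_b = Encoder(), Encoder()      # per-domain encoders
        self.gen_ab, self.gen_ba = Decoder(3), Decoder(3)  # cross-domain generators
        self.parse_a, self.parse_b = Decoder(num_classes), Decoder(num_classes)  # parsing nets
        self.dis_a, self.dis_b = Discriminator(), Discriminator()

    def forward(self, x_a, x_b):
        z_a, z_b = self.enc_a(x_a), self.enc_b(x_b)
        fake_b = torch.tanh(self.gen_ab(z_a))  # translate A -> B
        fake_a = torch.tanh(self.gen_ba(z_b))  # translate B -> A
        seg_a, seg_b = self.parse_a(z_a), self.parse_b(z_b)  # structure supervision
        return fake_b, fake_a, seg_a, seg_b

model = AugGANSketch()
x_a, x_b = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
fake_b, fake_a, seg_a, seg_b = model(x_a, x_b)
print(fake_b.shape, seg_a.shape)  # torch.Size([1, 3, 64, 64]) torch.Size([1, 19, 64, 64])
```

The design point this sketch tries to reflect is that the parsing decoders consume the same encoder features as the generators, so a segmentation objective on the parsing outputs would regularize the shared representation toward preserving object structure during translation.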
