Search Results for author: Kai Katsumata

Found 4 papers, 1 paper with code

Revisiting Latent Space of GAN Inversion for Real Image Editing

no code implementations • 18 Jul 2023 • Kai Katsumata, Duc Minh Vo, Bei Liu, Hideki Nakayama

The exploration of the latent space in StyleGANs and GAN inversion exemplify impressive real-world image editing, yet the trade-off between reconstruction quality and editing quality remains an open problem.
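To make that trade-off concrete, here is a minimal, hypothetical sketch of optimization-based GAN inversion in PyTorch. It is a generic illustration, not this paper's method: `generator`, `target_image`, and `w_avg` are assumed placeholders, and a single weight `lam` balances reconstruction fidelity against staying near an editable region of the latent space.

```python
# Illustrative sketch only: a generic optimization-based GAN inversion
# objective exposing the reconstruction/editing trade-off mentioned in
# the abstract. `generator`, `target_image`, and `w_avg` are
# hypothetical placeholders, not the paper's implementation.
import torch

def invert(generator, target_image, w_avg, steps=500, lam=0.1, lr=0.01):
    """Optimize a latent code w so that generator(w) matches target_image.

    A small lam favors faithful reconstruction; a large lam keeps w near
    the well-behaved average latent w_avg, which tends to preserve
    editability at the cost of reconstruction quality.
    """
    w = w_avg.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        recon = generator(w)
        # Reconstruction term: pulls w toward pixel-exact reconstruction.
        loss_rec = torch.nn.functional.mse_loss(recon, target_image)
        # Latent prior term: pulls w toward the editable region around w_avg.
        loss_reg = torch.mean((w - w_avg) ** 2)
        loss = loss_rec + lam * loss_reg
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```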

Soft Curriculum for Learning Conditional GANs with Noisy-Labeled and Uncurated Unlabeled Data

no code implementations • 17 Jul 2023 • Kai Katsumata, Duc Minh Vo, Tatsuya Harada, Hideki Nakayama

Noisy-labeled or curated unlabeled data is often used to compensate for the assumption of clean labeled data when training conditional generative adversarial networks; however, satisfying such an extended assumption is occasionally laborious or impractical.

Conditional Image Generation • Generative Adversarial Network
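The idea of a "soft" curriculum can be sketched as confidence-weighted training rather than hard filtering. The snippet below is an assumption-laden illustration, not the paper's algorithm: per-sample losses on noisy-labeled or unlabeled data are scaled by a hypothetical soft confidence score (e.g., from an auxiliary classifier) instead of being discarded outright.

```python
# Illustrative sketch, not the paper's method: soft-curriculum-style
# weighting in which noisy-labeled and unlabeled samples contribute to
# the discriminator loss in proportion to a soft confidence score.
import torch
import torch.nn.functional as F

def soft_weighted_d_loss(d_logits, labels, confidences):
    """Per-sample classification loss scaled by soft confidence weights.

    d_logits:     discriminator class logits, shape (N, C)
    labels:       possibly noisy class labels, shape (N,)
    confidences:  soft weights in [0, 1] estimating label reliability,
                  shape (N,); hypothetical, e.g. an auxiliary
                  classifier's softmax probability for the given label.
    """
    per_sample = F.cross_entropy(d_logits, labels, reduction="none")
    # Low-confidence (likely mislabeled or uncurated) samples are
    # down-weighted smoothly rather than hard-filtered out.
    return (confidences * per_sample).mean()
```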

Balancing Reconstruction and Editing Quality of GAN Inversion for Real Image Editing with StyleGAN Prior Latent Space

no code implementations • 31 May 2023 • Kai Katsumata, Duc Minh Vo, Bei Liu, Hideki Nakayama

The exploration of the latent space in StyleGANs and GAN inversion exemplify impressive real-world image editing, yet the trade-off between reconstruction quality and editing quality remains an open problem.

OSSGAN: Open-Set Semi-Supervised Image Generation

1 code implementation • CVPR 2022 • Kai Katsumata, Duc Minh Vo, Hideki Nakayama

We introduce a challenging training scheme for conditional GANs, called open-set semi-supervised image generation, in which the training dataset consists of two parts: (i) labeled data and (ii) unlabeled data containing both closed-set samples, which belong to one of the labeled classes, and open-set samples, which belong to none of them.

Image Generation
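The data setup described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of how such a dataset split looks (not OSSGAN's training algorithm); the class counts, feature dimensions, and variable names are all assumptions.

```python
# Minimal sketch of the open-set semi-supervised data setup: unlabeled
# data mixes closed-set samples (true class among the labeled classes)
# with open-set samples (classes never seen with labels). All names and
# sizes here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
num_classes_labeled = 5   # classes that appear in the labeled set
num_classes_total = 8     # classes 5..7 occur only as open-set samples

# (i) labeled data: (feature, class) pairs drawn from the known classes.
labeled = [(rng.normal(size=16), int(c))
           for c in rng.integers(0, num_classes_labeled, size=100)]

# (ii) unlabeled data: labels are hidden from the learner; some samples
# are closed-set (class < num_classes_labeled) and some are open-set.
hidden = rng.integers(0, num_classes_total, size=200)
unlabeled = [rng.normal(size=16) for _ in hidden]
is_open_set = hidden >= num_classes_labeled  # ground truth, unknown in training
```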
