1 code implementation • 19 Mar 2024 • Beomsu Kim, JaeMin Kim, Jeongsol Kim, Jong Chul Ye
Diffusion-based generative models excel in unconditional generation, as well as on applied tasks such as image editing and restoration.
no code implementations • 18 Mar 2024 • Jeongsol Kim, Geon Yeong Park, Jong Chul Ye
Reverse sampling and score-distillation have emerged as main workhorses in recent years for image manipulation using latent diffusion models (LDMs).
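For context, score distillation (as popularized by score distillation sampling, SDS) typically optimizes the parameters \(\theta\) of an image parameterization \(x = g(\theta)\) with a gradient of the following standard form; this is the generic published SDS gradient, not necessarily the exact objective used in this paper:

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta)
= \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,
\big(\hat{\epsilon}_\phi(x_t, t; y) - \epsilon\big)\,
\frac{\partial x}{\partial \theta} \right]
```

where \(x_t\) is the noised image at timestep \(t\), \(\hat{\epsilon}_\phi\) is the (text-conditioned) diffusion model's noise prediction given prompt \(y\), and \(w(t)\) is a timestep weighting.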
no code implementations • 27 Nov 2023 • Jeongsol Kim, Geon Yeong Park, Hyungjin Chung, Jong Chul Ye
The recent advent of diffusion models has led to significant progress in solving inverse problems, leveraging these models as effective generative priors.
1 code implementation • NeurIPS 2023 • Geon Yeong Park, Jeongsol Kim, Beomsu Kim, Sang Wan Lee, Jong Chul Ye
Despite the remarkable performance of text-to-image diffusion models in image generation tasks, recent studies have raised the issue that generated images sometimes fail to capture the intended semantic content of the text prompts, a phenomenon often called semantic misalignment.
no code implementations • CVPR 2023 • Hyungjin Chung, Jeongsol Kim, Sehui Kim, Jong Chul Ye
We show the efficacy of our method on two representative tasks -- blind deblurring and imaging through turbulence -- and demonstrate state-of-the-art performance, while remaining flexible enough to apply to general blind inverse problems whose functional forms are known.
2 code implementations • 29 Sep 2022 • Hyungjin Chung, Jeongsol Kim, Michael T. McCann, Marc L. Klasky, Jong Chul Ye
Diffusion models have recently been studied as powerful generative inverse problem solvers, owing to their high-quality reconstructions and the ease of combining them with existing iterative solvers.
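The general recipe of pairing a generative prior with an iterative measurement-consistency step can be illustrated on a toy linear-Gaussian problem. Everything below (the standard-normal prior, the scalar forward operator, the step size) is a hypothetical choice for this sketch, not the method from the paper; it uses plain Langevin dynamics on the posterior, where the prior score plays the role a diffusion model would play in practice.

```python
import numpy as np

# Toy sketch: generative prior + iterative data-consistency step.
# All quantities here are illustrative assumptions, not the paper's method.

rng = np.random.default_rng(0)

a, sigma = 2.0, 0.5        # forward model: y = a * x + N(0, sigma^2) noise
y = 1.8                    # a fixed, hypothetical measurement

def prior_score(x):
    # Score of a standard-normal prior: d/dx log N(x; 0, 1) = -x.
    # In a real solver, a learned diffusion model supplies this term.
    return -x

def data_score(x):
    # Measurement-consistency gradient:
    # d/dx log N(y; a*x, sigma^2) = a * (y - a*x) / sigma^2
    return a * (y - a * x) / sigma**2

# Unadjusted Langevin dynamics targeting the posterior p(x | y):
# each step combines the prior score with the data-consistency gradient.
eps = 1e-3
x = rng.standard_normal(5_000)            # many chains in parallel
for _ in range(3_000):
    noise = rng.standard_normal(x.shape)
    x += eps * (prior_score(x) + data_score(x)) + np.sqrt(2 * eps) * noise

# For this linear-Gaussian toy problem the posterior is known in closed
# form, so the samples can be checked against it.
post_prec = 1.0 + a**2 / sigma**2
post_mean = (a * y / sigma**2) / post_prec
print(f"empirical mean {x.mean():.3f} vs exact {post_mean:.3f}")
print(f"empirical var  {x.var():.4f} vs exact {1.0 / post_prec:.4f}")
```

Real diffusion solvers replace the fixed prior score with a time-dependent learned score and anneal the noise level, but the structural split into "prior step plus data-consistency step" is the same.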
no code implementations • NeurIPS 2021 • Sangjoon Park, Gwanghyun Kim, Jeongsol Kim, Boah Kim, Jong Chul Ye
For example, this enables neural network training for COVID-19 diagnosis on chest X-ray (CXR) images without collecting patient CXR data across multiple hospitals.
no code implementations • 2 Nov 2021 • Sangjoon Park, Gwanghyun Kim, Jeongsol Kim, Boah Kim, Jong Chul Ye
For example, this enables neural network training for COVID-19 diagnosis on chest X-ray (CXR) images without collecting patient CXR data across multiple hospitals.
1 code implementation • 29 Sep 2021 • Boah Kim, Jeongsol Kim, Jong Chul Ye
Inspired by the recent success of Vision Transformer (ViT), here we present a new distributed learning framework for image processing applications, allowing clients to learn multiple tasks with their private data.
no code implementations • 25 Sep 2019 • Byeongsu Sim, Gyutaek Oh, Jeongsol Kim, Chanyong Jung, Jong Chul Ye
To improve the performance of the classical generative adversarial network (GAN), the Wasserstein generative adversarial network (W-GAN) was developed as a Kantorovich dual formulation of the optimal transport (OT) problem using the Wasserstein-1 distance.
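The Kantorovich dual formulation referred to here is the standard Kantorovich-Rubinstein duality for the Wasserstein-1 distance, which W-GAN approximates by restricting the test function \(f\) (the "critic") to a parametric family constrained to be 1-Lipschitz:

```latex
W_1(P, Q) \;=\; \sup_{\|f\|_{L} \le 1}\;
\mathbb{E}_{x \sim P}\big[f(x)\big] - \mathbb{E}_{x \sim Q}\big[f(x)\big]
```

where the supremum ranges over all 1-Lipschitz functions \(f\), \(P\) is the data distribution, and \(Q\) is the generator's distribution.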