no code implementations • ECCV 2020 • Sangeek Hyun, Jae-Pil Heo
Single image super-resolution (SR) is a well-known ill-posed problem: multiple high-resolution (HR) images can correspond to a single low-resolution (LR) image, owing to the gap between their representation capacities.
no code implementations • 5 Jun 2024 • Sangeek Hyun, Jae-Pil Heo
Specifically, we design a hierarchy of Gaussians where finer-level Gaussians are parameterized by their coarser-level counterparts: the positions of finer-level Gaussians lie near those of their coarser-level counterparts, and their scales decrease monotonically as the level becomes finer, modeling both coarse and fine details of the 3D scene.
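The coarse-to-fine parameterization described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the bounded-offset and shrink-factor functions (`tanh`, sigmoid) are assumptions chosen to guarantee the two stated properties, namely that children stay near their parent and scales shrink with level.

```python
import math

def child_gaussian(parent_pos, parent_scale, offset_raw, scale_raw):
    """Derive a finer-level Gaussian from its coarser-level parent
    (illustrative sketch; the activation choices are assumptions).

    The child's position is the parent's position plus an offset bounded
    by the parent's scale, so children stay near their parent; the
    child's scale is the parent's scale times a factor in (0, 1), so
    scale decreases monotonically with level.
    """
    # tanh bounds each offset component to (-parent_scale, +parent_scale)
    offset = [parent_scale * math.tanh(o) for o in offset_raw]
    pos = [p + d for p, d in zip(parent_pos, offset)]
    # sigmoid maps any real number to (0, 1), guaranteeing shrinkage
    scale = parent_scale * (1.0 / (1.0 + math.exp(-scale_raw)))
    return pos, scale
```

Because the raw parameters are unconstrained, an optimizer can update them freely while the hierarchy constraints hold by construction.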
1 code implementation • 20 Mar 2024 • Jiwoo Chung, Sangeek Hyun, Sang-Heon Shim, Jae-Pil Heo
Specifically, by assessing channel importance based on their sensitivities to latent vector perturbations, our method enhances the diversity of samples in the compressed model.
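The idea of assessing channel importance by sensitivity to latent perturbations can be sketched as below. This is a hypothetical toy version: the probing scheme (Gaussian noise, mean absolute change) is an assumption for illustration, not the paper's exact score.

```python
import random

def channel_sensitivity(layer_fn, z, n_channels, eps=0.01, n_probes=8):
    """Estimate per-channel sensitivity of a generator layer to small
    latent perturbations: channels whose activations change most are
    deemed important for sample diversity (illustrative sketch).
    """
    base = layer_fn(z)  # per-channel activations at the clean latent
    sens = [0.0] * n_channels
    for _ in range(n_probes):
        # perturb every latent dimension with small Gaussian noise
        z_pert = [zi + random.gauss(0.0, eps) for zi in z]
        out = layer_fn(z_pert)
        for c in range(n_channels):
            sens[c] += abs(out[c] - base[c]) / n_probes
    return sens  # higher = more diversity-relevant; keep when pruning
```

Channels with low sensitivity would be the first candidates for removal when compressing the generator.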
1 code implementation • 26 Dec 2023 • Suho Park, SuBeen Lee, Sangeek Hyun, Hyun Seok Seong, Jae-Pil Heo
Based on these two scores, we define a query background relevant score that captures the similarity between the backgrounds of the query and the support, and utilize it to scale support background features to adaptively restrict the impact of disruptive support backgrounds.
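The background-rescaling step can be sketched as follows, assuming cosine similarity as the relevance measure and simple per-feature scaling; both are illustrative assumptions, not the paper's exact formulation of the query background relevant score.

```python
def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb + 1e-8)

def rescale_support_background(query_bg, support_bg_feats):
    """Scale each support background feature by its similarity to the
    query background, so dissimilar (disruptive) support backgrounds
    are suppressed (illustrative sketch)."""
    out = []
    for feat in support_bg_feats:
        score = max(cosine(query_bg, feat), 0.0)  # clamp to [0, 1]
        out.append([score * x for x in feat])
    return out
```

A support background similar to the query's keeps its magnitude, while a dissimilar one is scaled toward zero and contributes little to the prediction.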
1 code implementation • 11 Dec 2023 • Jiwoo Chung, Sangeek Hyun, Jae-Pil Heo
Despite the impressive generative capabilities of diffusion models, existing diffusion model-based style transfer methods require inference-stage optimization (e.g., fine-tuning or textual inversion of style), which is time-consuming, or fail to leverage the generative ability of large-scale diffusion models.
2 code implementations • 15 Nov 2023 • WonJun Moon, Sangeek Hyun, SuBeen Lee, Jae-Pil Heo
Dummy tokens conditioned on the text query take portions of the attention weights, preventing irrelevant video clips from being represented by the text query.
Ranked #1 on Highlight Detection on TvSum
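The dummy-token mechanism can be sketched as a single softmax attention step in which dummy keys compete with clip keys for attention mass. This is a minimal single-head sketch with raw dot products; the learned projections and the text conditioning of the dummies are omitted.

```python
import math

def attention_with_dummies(query, clip_keys, dummy_keys):
    """Softmax attention where dummy tokens compete with video clip
    tokens; mass absorbed by dummies reduces the contribution of
    text-irrelevant clips (illustrative sketch)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    logits = [dot(query, k) for k in clip_keys + dummy_keys]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    weights = [e / total for e in exps]
    clip_w = weights[:len(clip_keys)]    # attention on real clips
    dummy_w = weights[len(clip_keys):]   # mass "dumped" on dummies
    return clip_w, dummy_w
```

Since the softmax must sum to one, any weight the dummies attract is weight that irrelevant clips no longer receive.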
1 code implementation • CVPR 2023 • WonJun Moon, Sangeek Hyun, Sanguk Park, Dongchan Park, Jae-Pil Heo
Since we observe that a given query plays an insignificant role in transformer architectures, our encoding module begins with cross-attention layers that explicitly inject the context of the text query into the video representation.
Ranked #2 on Highlight Detection on TvSum
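The cross-attention injection described above can be sketched as follows: video features act as queries and text tokens as keys/values, so each clip representation is mixed with text context. This is a minimal single-head sketch with a residual connection and no learned projections, which are assumptions for illustration.

```python
import math

def cross_attention(video_feats, text_feats):
    """One cross-attention step mixing text context into each video
    clip feature (single-head illustrative sketch)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    d = len(text_feats[0]) ** 0.5  # scaled dot-product attention
    out = []
    for q in video_feats:
        logits = [dot(q, k) / d for k in text_feats]
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        s = sum(exps)
        w = [e / s for e in exps]
        # weighted sum of text tokens, added residually to the clip
        mixed = [qi + sum(wj * vj[i] for wj, vj in zip(w, text_feats))
                 for i, qi in enumerate(q)]
        out.append(mixed)
    return out
```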
no code implementations • CVPR 2023 • Haechan Noh, Sangeek Hyun, Woojin Jeong, Hanshin Lim, Jae-Pil Heo
The inverted index is a widely used data structure for avoiding infeasible exhaustive search.
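The standard inverted-file idea can be sketched as follows: database vectors are assigned to inverted lists by a coarse quantizer, and a query scans only the few lists nearest to it instead of the whole database. This is a generic sketch of the classic structure, not this paper's contribution.

```python
def _sqdist(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build_inverted_index(vectors, centroids):
    """Assign each database vector to its nearest coarse centroid,
    forming one inverted list per centroid."""
    lists = {i: [] for i in range(len(centroids))}
    for vid, v in enumerate(vectors):
        nearest = min(range(len(centroids)),
                      key=lambda i: _sqdist(v, centroids[i]))
        lists[nearest].append(vid)
    return lists

def search(query, vectors, centroids, lists, nprobe=1):
    """Scan only the nprobe lists whose centroids are closest to the
    query, then return the id of the nearest candidate vector."""
    probed = sorted(range(len(centroids)),
                    key=lambda i: _sqdist(query, centroids[i]))[:nprobe]
    candidates = [vid for i in probed for vid in lists[i]]
    return min(candidates, key=lambda vid: _sqdist(query, vectors[vid]))
```

With `nprobe` much smaller than the number of lists, only a fraction of the database is visited per query, at the cost of possibly missing neighbors that fall in unprobed lists.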
no code implementations • CVPR 2022 • Sang-Heon Shim, Sangeek Hyun, DaeHyun Bae, Jae-Pil Heo
To address this, we propose a novel attention module tailored for scene image synthesis, the Local Attention Pyramid (LAP) module. Since objects in scene images are scattered across the entire image, LAP encourages GANs to generate diverse object classes at high quality by explicitly spreading high attention scores to local regions.
no code implementations • CVPR 2021 • Sangeek Hyun, JiHwan Kim, Jae-Pil Heo
The proposed tasks enable the discriminators to learn representations of appearance and temporal context, and force the generator to synthesize videos with consistent appearance and natural flow of motions.