Search Results for author: Sangeek Hyun

Found 10 papers, 5 papers with code

VarSR: Variational Super-Resolution Network for Very Low Resolution Images

no code implementations • ECCV 2020 • Sangeek Hyun, Jae-Pil Heo

As is well known, single image super-resolution (SR) is an ill-posed problem: multiple high-resolution (HR) images can be matched to a single low-resolution (LR) image due to the difference in their representation capabilities.

Image Super-Resolution

Adversarial Generation of Hierarchical Gaussians for 3D Generative Model

no code implementations • 5 Jun 2024 • Sangeek Hyun, Jae-Pil Heo

Specifically, we design a hierarchy of Gaussians in which finer-level Gaussians are parameterized by their coarser-level counterparts: the position of a finer-level Gaussian is located near its coarser-level counterpart, and its scale decreases monotonically as the level becomes finer, modeling both coarse and fine details of the 3D scene.
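
For intuition, here is a minimal PyTorch sketch of such a parent–child parameterization (the module and parameter names are hypothetical, not from the paper's code): finer-level positions are bounded offsets from their coarser-level parents, and finer-level scales are a strict fraction of the parent scale.

```python
import torch
import torch.nn as nn

class HierarchicalGaussians(nn.Module):
    """Illustrative sketch: finer-level Gaussians derived from coarser-level parents."""

    def __init__(self, num_coarse: int, children_per_parent: int, max_offset: float = 0.1):
        super().__init__()
        self.children_per_parent = children_per_parent
        self.max_offset = max_offset
        # Coarse level: free position and log-scale per Gaussian.
        self.coarse_pos = nn.Parameter(torch.randn(num_coarse, 3) * 0.5)
        self.coarse_log_scale = nn.Parameter(torch.zeros(num_coarse, 3))
        # Fine level: parameterized relative to the parent Gaussian.
        n_fine = num_coarse * children_per_parent
        self.offset_raw = nn.Parameter(torch.zeros(n_fine, 3))  # -> bounded position offset
        self.shrink_raw = nn.Parameter(torch.zeros(n_fine, 3))  # -> scale ratio in (0, 1)

    def forward(self):
        coarse_scale = self.coarse_log_scale.exp()
        # Repeat each parent for its children.
        parent_pos = self.coarse_pos.repeat_interleave(self.children_per_parent, dim=0)
        parent_scale = coarse_scale.repeat_interleave(self.children_per_parent, dim=0)
        # Children stay near their parent: offset bounded by tanh.
        fine_pos = parent_pos + self.max_offset * torch.tanh(self.offset_raw)
        # Children are strictly smaller than their parent: ratio in (0, 1) via sigmoid.
        fine_scale = parent_scale * torch.sigmoid(self.shrink_raw)
        return (self.coarse_pos, coarse_scale), (fine_pos, fine_scale)
```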

3D Generation • Position

Diversity-aware Channel Pruning for StyleGAN Compression

1 code implementation • 20 Mar 2024 • Jiwoo Chung, Sangeek Hyun, Sang-Heon Shim, Jae-Pil Heo

Specifically, by assessing channel importance based on each channel's sensitivity to latent vector perturbations, our method enhances the diversity of samples generated by the compressed model.
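
As a rough illustration of that sensitivity criterion (a generic sketch with an assumed generator interface, sample count, and perturbation size, not the released implementation), one can perturb the latent vector and rank the channels of an intermediate feature map by how strongly their activations respond.

```python
import torch

@torch.no_grad()
def channel_sensitivity(generator, layer, z_dim=512, n_samples=64, eps=0.1, device="cpu"):
    """Score channels of `layer`'s output by sensitivity to latent perturbations (illustrative)."""
    feats = {}
    hook = layer.register_forward_hook(lambda mod, inp, out: feats.update(out=out))

    scores = None
    for _ in range(n_samples):
        z = torch.randn(1, z_dim, device=device)
        generator(z)                                  # forward pass with the original latent
        base = feats["out"].clone()
        generator(z + eps * torch.randn_like(z))      # forward pass with a perturbed latent
        pert = feats["out"]
        # Mean absolute change per channel (assumes an NCHW feature map).
        diff = (pert - base).abs().mean(dim=(0, 2, 3))
        scores = diff if scores is None else scores + diff

    hook.remove()
    return scores / n_samples  # higher score = channel more sensitive to latent changes
```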

Image Generation • Unconditional Image Generation

Task-Disruptive Background Suppression for Few-Shot Segmentation

1 code implementation • 26 Dec 2023 • Suho Park, SuBeen Lee, Sangeek Hyun, Hyun Seok Seong, Jae-Pil Heo

Based on these two scores, we define a query-background relevant score that captures the similarity between the backgrounds of the query and the support, and use it to scale support background features, adaptively restricting the impact of disruptive support backgrounds.
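
A minimal sketch of how such a relevance score could be used to rescale support background features (the mask convention, pooling, and function names here are assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def suppress_support_background(support_feat, support_bg_mask, query_bg_proto):
    """Scale support background features by similarity to the query background (illustrative).

    support_feat:    (C, H, W) support feature map
    support_bg_mask: (H, W)    1 where a support location is background
    query_bg_proto:  (C,)      pooled query background prototype
    """
    c, h, w = support_feat.shape
    sup = support_feat.view(c, h * w)                                   # (C, HW)
    # Cosine similarity of every support location to the query background prototype.
    sim = F.cosine_similarity(sup, query_bg_proto.unsqueeze(1), dim=0)  # (HW,)
    relevance = sim.clamp(min=0).view(h, w)                             # keep positive relevance only
    # Down-weight support background regions that are irrelevant to the query background;
    # foreground locations are left untouched.
    scale = torch.where(support_bg_mask.bool(), relevance, torch.ones_like(relevance))
    return support_feat * scale.unsqueeze(0)
```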

Style Injection in Diffusion: A Training-free Approach for Adapting Large-scale Diffusion Models for Style Transfer

1 code implementation • 11 Dec 2023 • Jiwoo Chung, Sangeek Hyun, Jae-Pil Heo

Despite the impressive generative capabilities of diffusion models, existing diffusion model-based style transfer methods require inference-stage optimization (e.g., fine-tuning or textual inversion of style), which is time-consuming, or fail to leverage the generative ability of large-scale diffusion models.

Style Transfer

Correlation-guided Query-Dependency Calibration in Video Representation Learning for Temporal Grounding

2 code implementations • 15 Nov 2023 • WonJun Moon, Sangeek Hyun, SuBeen Lee, Jae-Pil Heo

Dummy tokens conditioned on the text query absorb a portion of the attention weights, preventing irrelevant video clips from being represented by the text query.
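
The dummy-token idea can be illustrated with a short sketch (assumed shapes, names, and number of dummy tokens, not the released code): learnable tokens conditioned on the text query are appended to the attention keys/values so they can absorb attention mass that would otherwise flow from irrelevant video clips to the text tokens.

```python
import torch
import torch.nn as nn

class DummyTokenCrossAttention(nn.Module):
    """Illustrative cross-attention where learnable dummy tokens absorb part of the attention."""

    def __init__(self, dim: int, n_heads: int = 8, n_dummy: int = 3):
        super().__init__()
        self.dummy = nn.Parameter(torch.randn(n_dummy, dim) * 0.02)
        self.condition = nn.Linear(dim, dim)  # condition the dummies on the text query
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, video_tokens, text_tokens):
        # video_tokens: (B, T, D) clip features; text_tokens: (B, L, D) query features
        b = video_tokens.size(0)
        # Dummy tokens conditioned on the pooled text query.
        text_ctx = self.condition(text_tokens.mean(dim=1, keepdim=True))   # (B, 1, D)
        dummies = self.dummy.unsqueeze(0).expand(b, -1, -1) + text_ctx     # (B, n_dummy, D)
        # Video clips attend over text tokens plus dummy tokens; attention assigned to the
        # dummies takes weight away from clips that are irrelevant to the query.
        kv = torch.cat([text_tokens, dummies], dim=1)
        out, _ = self.attn(query=video_tokens, key=kv, value=kv)
        return out
```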

Highlight Detection • Moment Retrieval • +3

Query-Dependent Video Representation for Moment Retrieval and Highlight Detection

1 code implementation • CVPR 2023 • WonJun Moon, Sangeek Hyun, Sanguk Park, Dongchan Park, Jae-Pil Heo

Since we observe that a given query plays an insignificant role in existing transformer architectures, our encoding module starts with cross-attention layers that explicitly inject the context of the text query into the video representation.
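
The cross-attention encoding step described above can be sketched as follows (a minimal illustration with assumed shapes and layer names, not the released code): clip features act as attention queries and text tokens act as keys/values, so each clip representation is recomputed in the context of the text query.

```python
import torch
import torch.nn as nn

class QueryInjectionLayer(nn.Module):
    """Illustrative cross-attention layer: text query context injected into clip features."""

    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, video_tokens, text_tokens):
        # video_tokens: (B, T, D) clip features; text_tokens: (B, L, D) query features
        attended, _ = self.cross_attn(query=video_tokens, key=text_tokens, value=text_tokens)
        x = self.norm1(video_tokens + attended)  # residual + norm
        return self.norm2(x + self.ffn(x))       # position-wise feed-forward
```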

Highlight Detection • Moment Retrieval • +4

Local Attention Pyramid for Scene Image Generation

no code implementations • CVPR 2022 • Sang-Heon Shim, Sangeek Hyun, DaeHyun Bae, Jae-Pil Heo

To address this, we propose a novel attention module, the Local Attention Pyramid (LAP), tailored for scene image synthesis; it encourages GANs to generate diverse object classes at high quality by explicitly spreading high attention scores to local regions, since objects in scene images are scattered over the entire image.

Image Generation • Object

Self-Supervised Video GANs: Learning for Appearance Consistency and Motion Coherency

no code implementations • CVPR 2021 • Sangeek Hyun, JiHwan Kim, Jae-Pil Heo

The proposed tasks enable the discriminators to learn representations of appearance and temporal context, and force the generator to synthesize videos with consistent appearance and a natural flow of motion.

Contrastive Learning
