no code implementations • 8 Jan 2024 • Geunhyuk Youk, Jihyong Oh, Munchurl Kim
In this paper, we propose novel flow-guided dynamic filtering (FGDF) and iterative feature refinement with multi-attention (FRMA), which together constitute our VSRDB framework, denoted FMA-Net.
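The core idea behind dynamic filtering can be illustrated with a minimal numpy sketch: instead of one shared convolution kernel, a network predicts a separate kernel per pixel, which is then applied to the local patch around that pixel. This is plain dynamic filtering, not the paper's FGDF (which additionally guides the kernel sampling positions with estimated optical flow); the function name and shapes below are illustrative assumptions.

```python
import numpy as np

def dynamic_filtering(feat, kernels, k=3):
    """Apply spatially-variant (per-pixel) kernels to a feature map.

    feat:    (H, W) input feature map
    kernels: (H, W, k*k) per-pixel kernels, e.g. predicted by a network
    """
    H, W = feat.shape
    pad = k // 2
    padded = np.pad(feat, pad, mode="edge")
    out = np.zeros_like(feat)
    for y in range(H):
        for x in range(W):
            # dot product of the local k x k patch with this pixel's own kernel
            patch = padded[y:y + k, x:x + k].ravel()
            out[y, x] = patch @ kernels[y, x]
    return out
```

If every per-pixel kernel is a delta at the center tap, the operation reduces to the identity, which is a convenient sanity check for an implementation.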
no code implementations • 21 Dec 2023 • Minh-Quan Viet Bui, Jongmin Park, Jihyong Oh, Munchurl Kim
In response, we propose a novel dynamic deblurring NeRF framework for blurry monocular video, called DyBluRF, consisting of a Base Ray Initialization (BRI) stage and a Motion Decomposition-based Deblurring (MDD) stage.
1 code implementation • 19 Nov 2021 • Jihyong Oh, Munchurl Kim
In this paper, we propose a novel joint deblurring and multi-frame interpolation (DeMFI) framework, called DeMFI-Net, which accurately converts lower-frame-rate blurry videos into higher-frame-rate sharp videos via multi-frame interpolation (MFI), based on a flow-guided attentive-correlation-based feature bolstering (FAC-FB) module and recursive boosting (RB).
1 code implementation • ICCV 2021 • Hyeonjun Sim, Jihyong Oh, Munchurl Kim
In this paper, we first present a dataset (X4K1000FPS) of 4K videos at 1000 fps with extreme motion to the research community for video frame interpolation (VFI), and propose an extreme VFI network, called XVFI-Net, which is the first to handle VFI for 4K videos with large motion.
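A common flow-based baseline for VFI, which networks like XVFI-Net refine with learned components, is to backward-warp the two input frames toward the intermediate time with linearly scaled optical flow and blend the results. The sketch below is that naive baseline only, under the assumption of linear motion; function names are illustrative, not from the paper.

```python
import numpy as np

def backward_warp(img, flow):
    """Bilinearly sample img (H, W) at positions displaced by flow (H, W, 2)."""
    H, W = img.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    sy = np.clip(ys + flow[..., 1], 0, H - 1)  # vertical sample coords
    sx = np.clip(xs + flow[..., 0], 0, W - 1)  # horizontal sample coords
    y0 = np.floor(sy).astype(int); x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, H - 1); x1 = np.minimum(x0 + 1, W - 1)
    wy = sy - y0; wx = sx - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

def interpolate_midframe(img0, img1, flow01, t=0.5):
    """Naive intermediate frame at time t, assuming linear motion:
    F(t->0) ~ -t * F(0->1) and F(t->1) ~ (1-t) * F(0->1)."""
    warped0 = backward_warp(img0, -t * flow01)
    warped1 = backward_warp(img1, (1 - t) * flow01)
    return (1 - t) * warped0 + t * warped1
```

Learned VFI methods replace the linear-motion assumption and the fixed blend with estimated bidirectional flows, occlusion reasoning, and synthesis networks.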
no code implementations • 29 Mar 2021 • Jihyong Oh, Munchurl Kim
In this paper, we first propose a novel GAN-based multi-task learning (MTL) method for SAR target image generation, called PeaceGAN, which uses both pose angle and target class information, making it possible to produce SAR target images of desired target classes at intended pose angles.
1 code implementation • 16 Dec 2019 • Soo Ye Kim, Jihyong Oh, Munchurl Kim
In this paper, we first propose a joint VFI-SR framework for upscaling the spatio-temporal resolution of videos from 2K 30 fps to 4K 60 fps.
1 code implementation • 10 Sep 2019 • Soo Ye Kim, Jihyong Oh, Munchurl Kim
Joint learning of super-resolution (SR) and inverse tone-mapping (ITM) has recently been explored to convert legacy low-resolution (LR) standard dynamic range (SDR) videos into high-resolution (HR) high dynamic range (HDR) videos, meeting the growing need of UHD HDR TV/broadcasting applications.
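To make the ITM half of this task concrete, a toy global expansion can be sketched as follows: linearize the gamma-encoded SDR signal and rescale it to an HDR peak luminance. This is only the trivial global baseline that learned ITM methods go beyond (they also restore clipped highlights and local contrast); the parameter values are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def toy_itm(sdr, gamma=2.4, hdr_peak=1000.0):
    """Toy inverse tone-mapping of a gamma-encoded SDR signal in [0, 1].

    1. Undo the display gamma to get approximately linear light.
    2. Scale linear light to an assumed HDR peak luminance (in nits).
    """
    linear = np.clip(sdr, 0.0, 1.0) ** gamma  # linearize
    return linear * hdr_peak                  # expand dynamic range
```

Because this mapping is global and monotonic, it cannot recover detail lost to clipping or quantization in the SDR source, which is exactly where learned joint SR-ITM models earn their keep.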
1 code implementation • ICCV 2019 • Soo Ye Kim, Jihyong Oh, Munchurl Kim
Joint SR and ITM is an intricate task, where high-frequency details must be restored for SR jointly with the local contrast for ITM.