1 code implementation • 17 Apr 2024 • Jaehyung Kim, Jaehyun Nam, Sangwoo Mo, Jongjin Park, Sang-Woo Lee, Minjoon Seo, Jung-Woo Ha, Jinwoo Shin
While incorporating new information by retrieving relevant passages is a promising way to improve QA with LLMs, existing methods often require additional fine-tuning, which becomes infeasible with recent LLMs.
1 code implementation • 16 Apr 2024 • Woomin Song, Seunghyuk Oh, Sangwoo Mo, Jaehyung Kim, Sukmin Yun, Jung-Woo Ha, Jinwoo Shin
Large language models (LLMs) have shown remarkable performance in various natural language processing tasks.
1 code implementation • NeurIPS 2023 • Sangwoo Mo, Minkyu Kim, Kyungmin Lee, Jinwoo Shin
By combining these objectives, S-CLIP significantly enhances the training of CLIP using only a few image-text pairs, as demonstrated in various specialist domains, including remote sensing, fashion, scientific figures, and comics.
1 code implementation • 28 Feb 2023 • Sangwoo Mo, Jong-Chyi Su, Chih-Yao Ma, Mido Assran, Ishan Misra, Licheng Yu, Sean Bell
Semi-supervised learning aims to train a model using limited labels.
no code implementations • NeurIPS 2023 • Hyosoon Jang, Seonghyun Park, Sangwoo Mo, Sungsoo Ahn
This paper studies structured node classification on graphs, where the predictions should consider dependencies between the node labels.
1 code implementation • 26 Jan 2023 • Younghyun Kim, Sangwoo Mo, Minkyu Kim, Kyungmin Lee, Jaeho Lee, Jinwoo Shin
The keyword explanation form of visual bias offers several advantages, such as clear group naming for bias discovery and a natural extension to debiasing using these group names.
2 code implementations • 13 Dec 2022 • Hyunwoo Kang, Sangwoo Mo, Jinwoo Shin
Using the object labels, OAMixer computes a reweighting mask with a learnable scale parameter that intensifies the interaction of patches containing similar objects, and applies the mask to the patch-mixing layers.
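The reweighting idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the object labels, the exponential reweighting form, and the plain-float `scale` (learnable in the actual model) are all simplifying assumptions here.

```python
import numpy as np

def object_reweighting_mask(labels, scale=1.0):
    # labels: (N,) integer object label per patch.
    # Pairs of patches sharing an object label get a larger weight,
    # controlled by `scale`; rows are normalized to sum to 1.
    same = (labels[:, None] == labels[None, :]).astype(float)
    mask = np.exp(scale * same)
    return mask / mask.sum(axis=1, keepdims=True)

def apply_to_mixing(patches, mask):
    # Weighted aggregation over patches, standing in for a patch-mixing layer.
    return mask @ patches

labels = np.array([0, 0, 1])              # patches 0 and 1 contain the same object
mask = object_reweighting_mask(labels, scale=2.0)
mixed = apply_to_mixing(np.eye(3), mask)  # toy 3-patch, 3-dim features
```

With `scale > 0`, same-object pairs receive strictly larger weights than cross-object pairs, so mixing concentrates within objects.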
no code implementations • 5 Dec 2022 • Junhyun Nam, Sangwoo Mo, Jaeho Lee, Jinwoo Shin
(a) Fairness Intervention (FI): emphasize the minority samples that are hard to generate due to the spurious correlation in the training dataset.
1 code implementation • 1 Oct 2022 • Tsung-Wei Ke, Sangwoo Mo, Stella X. Yu
Large vision-and-language models learned directly from image-text associations often lack detailed visual substantiation, whereas image segmentation tasks are treated separately from recognition and learned with supervision, without interconnections.
1 code implementation • ICLR 2022 • Sihyun Yu, Jihoon Tack, Sangwoo Mo, Hyunsu Kim, Junho Kim, Jung-Woo Ha, Jinwoo Shin
In this paper, we find that the recently emerging paradigm of implicit neural representations (INRs), which encode a continuous signal into a parameterized neural network, effectively mitigates the issue.
Ranked #25 on Video Generation on UCF-101
1 code implementation • NeurIPS 2021 • Sangwoo Mo, Hyunwoo Kang, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin
Contrastive self-supervised learning has shown impressive results in learning visual representations from unlabeled images by enforcing invariance against different data augmentations.
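The augmentation-invariance objective referenced above is typically an InfoNCE-style contrastive loss. Below is a minimal numpy sketch of an NT-Xent (SimCLR-style) loss between two augmented views; it is a generic illustration of the contrastive setup, not this paper's specific method, and the temperature value is an arbitrary choice.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    # z1, z2: (N, D) embeddings of the same N images under two augmentations.
    # Each embedding's positive is its other view; all remaining 2N-2 are negatives.
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Minimizing this loss pulls the two views of each image together while pushing apart views of different images, which is what enforces invariance to the chosen augmentations.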
no code implementations • 22 Jul 2021 • Sihyun Yu, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin
Abstract reasoning, i.e., inferring complicated patterns from given observations, is a central building block of artificial general intelligence.
1 code implementation • 17 Dec 2020 • Seung Jun Moon, Sangwoo Mo, Kimin Lee, Jaeho Lee, Jinwoo Shin
We claim that one central obstacle to reliability is the model's over-reliance on a limited number of keywords instead of the whole context.
1 code implementation • ICLR 2021 • Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin
Recent discoveries on neural network pruning reveal that, with a carefully chosen layerwise sparsity, a simple magnitude-based pruning achieves state-of-the-art tradeoff between sparsity and performance.
1 code implementation • NeurIPS 2020 • Jihoon Tack, Sangwoo Mo, Jongheon Jeong, Jinwoo Shin
Based on this, we propose a new detection score that is specific to the proposed training scheme.
4 code implementations • 25 Feb 2020 • Sangwoo Mo, Minsu Cho, Jinwoo Shin
Generative adversarial networks (GANs) have shown outstanding performance on a wide range of problems in computer vision, graphics, and machine learning, but often require large amounts of training data and heavy computational resources.
Ranked #5 on 10-shot image generation on Babies
1 code implementation • ICLR 2020 • Sejun Park, Jaeho Lee, Sangwoo Mo, Jinwoo Shin
Magnitude-based pruning is one of the simplest methods for pruning neural networks.
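Magnitude-based pruning can be sketched in a few lines: within a layer, zero out the smallest-magnitude fraction of weights. This is a generic illustration of the technique, not the paper's analysis; tie-breaking at the threshold is a simplification here.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    # Zero out the fraction `sparsity` of weights with smallest magnitude.
    k = int(round(sparsity * weights.size))
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    thresh = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return weights * (np.abs(weights) > thresh)

w = np.array([0.1, -0.5, 0.05, 2.0])
pruned = magnitude_prune(w, 0.5)  # drops the two smallest-magnitude entries
```

The layerwise sparsity levels (how `sparsity` is chosen per layer) are exactly what work like the paper above studies; this sketch just fixes it as a single number.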
1 code implementation • NeurIPS 2019 • Sangwoo Mo, Chiheon Kim, Sungwoong Kim, Minsu Cho, Jinwoo Shin
Conditional generative adversarial networks (cGANs) have gained considerable attention in recent years due to their class-wise controllability and superior quality on complex generation tasks.
1 code implementation • ICLR 2019 • Sangwoo Mo, Minsu Cho, Jinwoo Shin
Unsupervised image-to-image translation has gained considerable attention due to the recent impressive progress based on generative adversarial networks (GANs).
1 code implementation • 28 Dec 2018 • Sangwoo Mo, Minsu Cho, Jinwoo Shin
Our comparative evaluation demonstrates the effectiveness of the proposed method on different image datasets, in particular, in the aforementioned challenging cases.