1 code implementation • 10 May 2024 • Joonho Lee, Jae Oh Woo, Juree Seok, Parisa Hassanzadeh, Wooseok Jang, JuYoun Son, Sima Didari, Baruch Gutow, Heng Hao, Hankyu Moon, WenJun Hu, Yeong-Dae Kwon, TaeHee Lee, Seungjai Min
Assessing response quality to instructions in language models is vital but challenging due to the complexity of human language across different contexts.
no code implementations • 16 Aug 2021 • Yonghyun Jeong, Doyeon Kim, Seungjai Min, Seongho Joe, Youngjune Gwon, Jongwon Choi
Advances in generative models have a two-fold effect: they make it simple and easy to generate realistic synthesized images, but they also increase the risk of malicious abuse of those images.
no code implementations • 27 Jan 2021 • Hyunjae Lee, Jaewoong Yoon, Bonggyu Hwang, Seongho Joe, Seungjai Min, Youngjune Gwon
A Lite BERT (ALBERT) has been introduced to scale up deep bidirectional representation learning for natural language.
no code implementations • 26 Jan 2021 • Hyunjin Choi, Judong Kim, Seongho Joe, Seungjai Min, Youngjune Gwon
In zero-shot cross-lingual transfer, a supervised NLP task trained on a corpus in one language is directly applicable to another language without any additional training.
no code implementations • 16 Jan 2021 • Byoungjip Kim, Jinho Choo, Yeong-Dae Kwon, Seongho Joe, Seungjai Min, Youngjune Gwon
This paper introduces SelfMatch, a semi-supervised learning method that combines the power of contrastive self-supervised learning and consistency regularization.
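The consistency-regularization half of this combination can be sketched in a few lines. In the FixMatch-style scheme that methods like SelfMatch build on, the model's prediction on a weakly augmented view of an unlabeled image provides a hard pseudo-label; if that prediction is confident enough, cross-entropy is applied to the prediction on a strongly augmented view. The function and parameter names below are illustrative, not taken from the paper's code.

```python
import numpy as np

def consistency_loss(p_weak, p_strong, threshold=0.95):
    """Consistency-regularization sketch for one unlabeled sample.

    p_weak:   softmax prediction on the weakly augmented view
    p_strong: softmax prediction on the strongly augmented view
    """
    p_weak = np.asarray(p_weak, dtype=float)
    p_strong = np.asarray(p_strong, dtype=float)
    if p_weak.max() < threshold:
        return 0.0  # low-confidence samples are masked out of the loss
    pseudo_label = p_weak.argmax()  # hard pseudo-label from the weak view
    # cross-entropy between the pseudo-label and the strong-view prediction
    return float(-np.log(p_strong[pseudo_label] + 1e-12))
```

In practice this term is averaged over a batch of unlabeled samples and added to the supervised loss on labeled data; the confidence threshold keeps noisy pseudo-labels from dominating early training.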
2 code implementations • NeurIPS 2020 • Yeong-Dae Kwon, Jinho Choo, Byoungjip Kim, Iljoo Yoon, Youngjune Gwon, Seungjai Min
We introduce Policy Optimization with Multiple Optima (POMO), an end-to-end approach for building a heuristic solver for combinatorial optimization problems.
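The core idea in POMO is that one combinatorial problem instance (e.g. a TSP tour) has many equivalent optimal solutions, so the policy is rolled out from N different starting points and trained with REINFORCE using the mean reward of those N rollouts as a shared baseline. A minimal sketch of that advantage computation, with illustrative names not taken from the authors' code:

```python
import numpy as np

def pomo_advantages(rewards):
    """Shared-baseline advantages for N rollouts of one problem instance.

    rewards: length-N array of rollout rewards (e.g. negative tour lengths),
             one per starting point. The baseline is the mean over the N
             rollouts, so each advantage measures how much better a rollout
             did than its siblings on the same instance.
    """
    rewards = np.asarray(rewards, dtype=float)
    baseline = rewards.mean()          # shared baseline, no learned critic
    return rewards - baseline          # advantages sum to zero by construction
```

Each rollout's log-probability is then weighted by its advantage in the policy-gradient loss; because the baseline comes from sibling rollouts of the same instance, no separate value network is needed.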