1 code implementation • 30 Jan 2024 • Saelyne Yang, Sunghyun Park, Yunseok Jang, Moontae Lee
Experiments with answerability classification tasks demonstrate the complexity of YTCommentQA and underscore the need to understand the combined role of visual and script information in video reasoning.
no code implementations • 17 Feb 2023 • Lajanugen Logeswaran, Sungryull Sohn, Yunseok Jang, Moontae Lee, Honglak Lee
This work explores the problem of generating task graphs of real-world activities.
no code implementations • 17 Feb 2023 • Yunseok Jang, Sungryull Sohn, Lajanugen Logeswaran, Tiange Luo, Moontae Lee, Honglak Lee
Real-world tasks consist of multiple inter-dependent subtasks (e.g., a dirty pan needs to be washed before it can be used for cooking).
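To make the idea of inter-dependent subtasks concrete, here is a minimal illustrative sketch, not the paper's method: subtask dependencies modeled as a directed acyclic graph, with a valid execution order recovered by topological sorting. The task names ("wash_pan", "cook", "serve") are hypothetical examples chosen to mirror the pan-washing illustration above.

```python
# Illustrative sketch only (not the paper's approach): model subtask
# dependencies as a DAG, where listing u as a prerequisite of v means
# u must finish before v can start.
from graphlib import TopologicalSorter

# Each key maps a subtask to the set of subtasks it depends on.
# Task names are hypothetical.
dependencies = {
    "wash_pan": set(),        # no prerequisites
    "cook": {"wash_pan"},     # the pan must be washed before cooking
    "serve": {"cook"},        # cooking must finish before serving
}

# static_order() yields subtasks in an order that respects every dependency.
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # ['wash_pan', 'cook', 'serve']
```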
no code implementations • 14 May 2022 • Yunseok Jang, Ruben Villegas, Jimei Yang, Duygu Ceylan, Xin Sun, Honglak Lee
We test the effectiveness of our representation on the human image harmonization task by predicting shading that is coherent with a given background image.
1 code implementation • ICCV 2019 • Yunseok Jang, Tianchen Zhao, Seunghoon Hong, Honglak Lee
With the remarkable success of deep learning, Deep Neural Networks (DNNs) have become dominant tools across a wide range of machine learning domains.
no code implementations • ICLR 2019 • Dingdong Yang, Seunghoon Hong, Yunseok Jang, Tianchen Zhao, Honglak Lee
We propose a simple yet highly effective method that addresses the mode-collapse problem in the Conditional Generative Adversarial Network (cGAN).
no code implementations • ICML 2018 • Yunseok Jang, Gunhee Kim, Yale Song
Video prediction aims to generate realistic future frames by learning dynamic visual patterns.
2 code implementations • CVPR 2017 • Yunseok Jang, Yale Song, Youngjae Yu, Youngjin Kim, Gunhee Kim
In this paper, we focus on extending VQA to the video domain and contribute to the literature in three important ways.
Ranked #33 on Visual Question Answering (VQA) on MSRVTT-QA