no code implementations • 18 Feb 2024 • Shirley Anugrah Hayati, Taehee Jung, Tristan Bodding-Long, Sudipta Kar, Abhinav Sethy, Joo-Kyung Kim, Dongyeop Kang
Fine-tuning large language models (LLMs) on a large and diverse collection of instructions has been shown to improve their generalization across tasks, even unseen ones.
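As a rough illustration of what such instruction data looks like, here is a minimal sketch; the field names and prompt template are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch of formatting instruction-tuning examples; the
# template below is an illustrative assumption, not the paper's format.
from dataclasses import dataclass

@dataclass
class InstructionExample:
    instruction: str   # natural-language task description
    input_text: str    # optional task input (may be empty)
    output_text: str   # target response the model learns to produce

def to_prompt(ex: InstructionExample) -> str:
    """Serialize one example into a single training string."""
    prompt = f"Instruction: {ex.instruction}\n"
    if ex.input_text:
        prompt += f"Input: {ex.input_text}\n"
    return prompt + f"Response: {ex.output_text}"

ex = InstructionExample(
    instruction="Summarize the text in one sentence.",
    input_text="Diverse instruction data improves generalization to unseen tasks.",
    output_text="Instruction diversity aids generalization.",
)
print(to_prompt(ex))
```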
no code implementations • 16 Feb 2024 • Jihyung Kil, Farideh Tavazoee, Dongyeop Kang, Joo-Kyung Kim
II-MMR then analyzes this path to identify different reasoning cases in current VQA benchmarks by estimating how many hops and what types (i.e., visual or beyond-visual) of reasoning are required to answer the question.
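A hedged sketch of this kind of reasoning-case bucketing; the step representation and case labels are hypothetical stand-ins for II-MMR's actual path analysis.

```python
# Bucket a VQA question by the hops and reasoning types on its path.
# The 'visual'/'beyond-visual' step tags are hypothetical stand-ins.
def reasoning_case(steps: list[str]) -> str:
    """steps: one entry per hop, each 'visual' or 'beyond-visual'."""
    hops = len(steps)
    if hops <= 1:
        return "single-hop"
    if all(s == "visual" for s in steps):
        return f"{hops}-hop, purely visual"
    return f"{hops}-hop, requires beyond-visual reasoning"

print(reasoning_case(["visual"]))                    # single-hop
print(reasoning_case(["visual", "beyond-visual"]))   # 2-hop, requires beyond-visual reasoning
```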
no code implementations • 16 Nov 2023 • Jinyoung Park, Ameen Patel, Omar Zia Khan, Hyunwoo J. Kim, Joo-Kyung Kim
Specifically, we first leverage LLMs to construct a "question/rationale graph" by using knowledge extraction prompting given the initial question and the rationales generated in the previous steps.
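A minimal sketch of such LLM-driven graph construction, assuming a hypothetical `call_llm` client and an illustrative extraction prompt; the actual method extracts richer relations than the single yes/no check shown here.

```python
def call_llm(prompt: str) -> str:
    # Stub so the sketch runs end to end; replace with a real LLM call.
    return "yes"

def build_question_rationale_graph(question: str, rationales: list[str]):
    """Return (nodes, edges) connecting the question to supporting rationales."""
    nodes = [("question", question)] + [("rationale", r) for r in rationales]
    edges = []
    for r in rationales:
        prompt = (
            "Does the rationale support answering the question? Answer yes or no.\n"
            f"Question: {question}\nRationale: {r}"
        )
        if call_llm(prompt).strip().lower().startswith("yes"):
            edges.append((question, r, "supports"))
    return nodes, edges

nodes, edges = build_question_rationale_graph(
    "Who wrote Hamlet?", ["Hamlet is a play by William Shakespeare."]
)
print(edges)  # [('Who wrote Hamlet?', 'Hamlet is a play by ...', 'supports')]
```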
1 code implementation • 17 Feb 2023 • Taehee Jung, Joo-Kyung Kim, Sungjin Lee, Dongyeop Kang
For extreme multi-label classification (XMC), existing classification-based models perform poorly on tail labels and often ignore semantic relations among labels, e.g., treating "Wikipedia" and "Wiki" as independent, separate labels.
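To illustrate the "Wikipedia"/"Wiki" point, a hedged sketch that clusters near-duplicate labels by surface similarity; the similarity measure and threshold are assumptions, and the paper's approach models label semantics more broadly than string overlap.

```python
# Cluster near-duplicate labels before training so that "Wikipedia"
# and "Wiki" are not treated as unrelated. Threshold is an assumption.
from difflib import SequenceMatcher

def similar(a: str, b: str, thresh: float = 0.6) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= thresh

labels = ["Wikipedia", "Wiki", "Python", "Machine Learning"]
groups: list[list[str]] = []
for lab in labels:
    for g in groups:
        if similar(lab, g[0]):
            g.append(lab)
            break
    else:
        groups.append([lab])
print(groups)  # [['Wikipedia', 'Wiki'], ['Python'], ['Machine Learning']]
```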
no code implementations • 25 Sep 2021 • Joo-Kyung Kim, Guoyin Wang, Sungjin Lee, Young-Bum Kim
A large-scale conversational agent can struggle to understand user utterances that carry various ambiguities, such as ASR ambiguity, intent ambiguity, and hypothesis ambiguity.
no code implementations • 8 Mar 2020 • Joo-Kyung Kim, Young-Bum Kim
In large-scale domain classification, an utterance can be handled by multiple domains with overlapping capabilities.
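A minimal sketch of why the overlap matters: keep every domain whose score clears a threshold rather than a single argmax; the scores and threshold are illustrative assumptions.

```python
# Return all domains confident enough to handle the utterance,
# instead of forcing a single argmax choice. Values are illustrative.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

domain_logits = {"Music": 2.1, "Podcasts": 1.9, "Weather": -1.0}
probs = dict(zip(domain_logits, softmax(list(domain_logits.values()))))
candidates = [d for d, p in probs.items() if p >= 0.3]
print(candidates)  # ['Music', 'Podcasts'] — both could handle "play jazz"
```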
no code implementations • EMNLP 2018 • Joo-Kyung Kim, Young-Bum Kim
Through supervised attention, the attention weights are explicitly encouraged to match the corresponding elements of the ground truth's one-hot vector, and the attention information of the other enabled domains is leveraged through self-distillation.
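A hedged PyTorch sketch of the supervised-attention idea: cross-entropy against the gold index pushes the softmaxed attention toward the ground truth's one-hot vector. The shapes and loss choice are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def supervised_attention_loss(attn_logits: torch.Tensor,
                              gold_index: torch.Tensor) -> torch.Tensor:
    """attn_logits: (batch, num_domains) pre-softmax attention scores.
    gold_index: (batch,) index of the ground-truth domain."""
    # Cross-entropy against the gold index is equivalent to matching
    # the one-hot target under the softmaxed attention distribution.
    return F.cross_entropy(attn_logits, gold_index)

attn_logits = torch.randn(4, 10, requires_grad=True)
gold = torch.tensor([3, 1, 7, 0])
loss = supervised_attention_loss(attn_logits, gold)
loss.backward()
print(float(loss))
```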
no code implementations • 29 Jun 2018 • Joo-Kyung Kim, Young-Bum Kim
In domain classification for spoken dialog systems, correctly detecting out-of-domain (OOD) utterances is crucial because it reduces confusion and unnecessary interaction costs between users and the system.
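A minimal sketch of the OOD decision this motivates, using the common max-softmax-probability baseline; the rule and threshold here are assumptions, not necessarily the paper's method.

```python
# Reject an utterance as out-of-domain when no in-domain class is
# confident enough. Threshold 0.5 is an illustrative assumption.
def is_out_of_domain(domain_probs: dict[str, float], thresh: float = 0.5) -> bool:
    return max(domain_probs.values()) < thresh

print(is_out_of_domain({"Music": 0.9, "Weather": 0.1}))    # False: in-domain
print(is_out_of_domain({"Music": 0.34, "Weather": 0.33}))  # True: reject as OOD
```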
no code implementations • NAACL 2018 • Young-Bum Kim, Dongchan Kim, Joo-Kyung Kim, Ruhi Sarikaya
Intelligent personal digital assistants (IPDAs), a popular real-life application of spoken language understanding, can cover potentially thousands of overlapping domains for natural language understanding, making it challenging at scale to find the best domain to handle an utterance.
no code implementations • EMNLP 2017 • Joo-Kyung Kim, Young-Bum Kim, Ruhi Sarikaya, Eric Fosler-Lussier
Evaluating on POS datasets for 14 languages in the Universal Dependencies corpus, we show that the proposed transfer learning model improves POS tagging performance on the target languages without exploiting any linguistic knowledge shared between the source and target languages.
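A hedged sketch of a shared-encoder transfer setup in this spirit, where one encoder is reused across source and target languages; the sizes and architecture are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class SharedPOSTagger(nn.Module):
    def __init__(self, vocab_size=1000, emb=64, hidden=128, num_tags=17):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        # Shared BiLSTM encoder: reused for both source and target language,
        # so source-language training can benefit the target tagger.
        self.encoder = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.tag_head = nn.Linear(2 * hidden, num_tags)  # 17 Universal POS tags

    def forward(self, token_ids):                 # (batch, seq)
        h, _ = self.encoder(self.embed(token_ids))
        return self.tag_head(h)                   # (batch, seq, num_tags)

tagger = SharedPOSTagger()
logits = tagger(torch.randint(0, 1000, (2, 5)))
print(logits.shape)  # torch.Size([2, 5, 17])
```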