1 code implementation • NAACL 2022 • Muhammad Qorib, Seung-Hoon Na, Hwee Tou Ng
In this paper, we formulate system combination for grammatical error correction (GEC) as a simple machine learning task: binary classification.
Ranked #3 on Grammatical Error Correction on BEA-2019 (test)
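A minimal sketch of this binary-classification view of system combination, assuming a toy feature vector that only encodes which hypothetical base systems (sys_a, sys_b, sys_c) propose each candidate edit; the paper's actual feature set and training data are not reproduced here:

```python
# Sketch: each candidate edit proposed by any base GEC system becomes one
# training example, and a binary classifier decides whether to keep it.
# The feature encoding (one indicator per base system) is an illustrative
# assumption, not the paper's exact feature set.
import numpy as np
from sklearn.linear_model import LogisticRegression

SYSTEMS = ["sys_a", "sys_b", "sys_c"]  # hypothetical base GEC systems

def edit_features(proposing_systems):
    """1/0 indicator for each base system that proposes the edit."""
    return np.array([1.0 if s in proposing_systems else 0.0 for s in SYSTEMS])

# Toy training data: (systems proposing the edit, was the edit correct?)
train = [({"sys_a", "sys_b"}, 1), ({"sys_c"}, 0),
         ({"sys_a", "sys_b", "sys_c"}, 1), ({"sys_b"}, 0)]
X = np.stack([edit_features(s) for s, _ in train])
y = np.array([label for _, label in train])

clf = LogisticRegression().fit(X, y)

# At combination time, keep an edit if the classifier accepts it.
candidate = {"sys_a", "sys_c"}
keep = clf.predict(edit_features(candidate).reshape(1, -1))[0] == 1
print("keep edit:", keep)
```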
no code implementations • COLING 2022 • Eunhwan Park, Jong-Hyeon Lee, Jeon Dong Hyeon, Seonhoon Kim, Inho Kang, Seung-Hoon Na
This study proposes Semantic-Infused SElective Graph Reasoning (SISER) for fact verification, which introduces semantic-level graph reasoning and injects its reasoning-enhanced representation into other graph-based and sequence-based reasoning methods.
1 code implementation • ACL 2022 • Eunhwan Park, Donghyeon Jeon, Seonhoon Kim, Inho Kang, Seung-Hoon Na
LM-BFF (Gao et al., 2021) achieves strong few-shot performance by using auto-generated prompts and adding demonstrations similar to the input example.
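A minimal sketch of this prompt-plus-demonstrations setup; the cloze template, label words, and sentiment examples below are illustrative assumptions rather than the auto-generated prompts used by LM-BFF:

```python
# Sketch: build a cloze-style prompt and prepend demonstrations drawn from
# examples similar to the input.  Template and label words are assumptions.
TEMPLATE = "{sentence} It was [MASK]."          # cloze-style prompt
LABEL_WORDS = {"positive": "great", "negative": "terrible"}

def build_prompt(x, demonstrations):
    """Concatenate demonstrations (filled with label words) before the query."""
    parts = [TEMPLATE.format(sentence=s).replace("[MASK]", LABEL_WORDS[y])
             for s, y in demonstrations]
    parts.append(TEMPLATE.format(sentence=x))   # query keeps the [MASK] slot
    return " ".join(parts)

demos = [("A gripping, well-acted film.", "positive"),
         ("Dull and far too long.", "negative")]
print(build_prompt("The plot never quite comes together.", demos))
```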
no code implementations • SemEval (NAACL) 2022 • Daewook Kang, Sung-Min Lee, Eunhwan Park, Seung-Hoon Na
In this study, we examine the ability of contextualized representations from pretrained language models to distinguish whether sequences from instructional articles are plausible or implausible.
1 code implementation • SemEval (NAACL) 2022 • Sung-Min Lee, Seung-Hoon Na
This paper describes our system for SemEval-2022 Task 12, ‘linking mathematical symbols to their descriptions’, which achieved first place on the leaderboard for all subtasks, comprising named entity recognition (NER) and relation extraction (RE).
Joint Entity and Relation Extraction • Machine Reading Comprehension +1
1 code implementation • Conference 2023 • Sung-Min Lee, Eunhwan Park, Daeryong Seo, Donghyeon Jeon, Inho Kang, Seung-Hoon Na
Transformer-based models for question answering (QA) over tables and texts confront a “long” hybrid sequence over tabular and textual elements, causing long-range reasoning problems.
Ranked #1 on Question Answering on HybridQA
1 code implementation • 1 Apr 2022 • Hee-Jun Jung, Doyeon Kim, Seung-Hoon Na, Kangil Kim
To resolve this issue during transfer, we investigate distillation of representation structures, categorized into three types: intra-feature, local inter-feature, and global inter-feature structures.
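One plausible reading of structure distillation, sketched under the assumption that a "structure" is a pairwise similarity matrix over representations; the single inter-feature loss below stands in for the three structure types and is not the paper's exact formulation:

```python
# Sketch: the student matches the teacher's pairwise similarity structure
# over a batch of representations instead of the raw feature values.
import torch
import torch.nn.functional as F

def similarity_structure(h):
    """Pairwise cosine-similarity matrix over the batch of representations."""
    h = F.normalize(h, dim=-1)
    return h @ h.t()

def structure_distillation_loss(student_h, teacher_h):
    """Penalize divergence between student and teacher similarity structures."""
    return F.mse_loss(similarity_structure(student_h),
                      similarity_structure(teacher_h))

teacher_h = torch.randn(8, 256)                      # frozen teacher features
student_h = torch.randn(8, 128, requires_grad=True)  # student features
loss = structure_distillation_loss(student_h, teacher_h.detach())
loss.backward()
print(float(loss))
```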
1 code implementation • Applied Sciences 2022 • HeeSeung Jung, Kangil Kim, Jong-Hun Shin, Seung-Hoon Na, SangKeun Jung, Sangmin Woo
Most neural machine translation models are implemented as a conditional language model framework composed of encoder and decoder models.
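For reference, the conditional language model view factorizes the probability of a target sentence $y$ given the encoded source sentence $x$ over decoding steps:

$$p(y \mid x) = \prod_{t=1}^{T} p\left(y_t \mid y_{<t},\, x\right)$$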
no code implementations • SEMEVAL 2020 • Seung-Hoon Na, Jong-Hyeon Lee
This paper presents our contributions to SemEval-2020 Task 4, Commonsense Validation and Explanation (ComVE), and reports experimental results for Subtasks B and C.
no code implementations • CONLL 2020 • Seung-Hoon Na, Jinwoo Min
This paper describes the Jeonbuk National University (JBNU) system for the 2020 shared task on Cross-Framework Meaning Representation Parsing at the Conference on Computational Natural Language Learning.
no code implementations • CONLL 2019 • Seung-Hoon Na, Jinwoon Min, Kwanghyeon Park, Jong-Hun Shin, Young-Kil Kim
We propose a unified parsing model using biaffine attention (Dozat and Manning, 2017), consisting of 1) a BERT-BiLSTM encoder and 2) a biaffine attention decoder.
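A minimal sketch of biaffine arc scoring in the spirit of Dozat and Manning (2017), with hidden dimensions and a single unlabeled-arc head chosen for illustration (the encoder states are assumed to come from a BERT-BiLSTM encoder):

```python
# Sketch: head/dependent projections of encoder states are combined through a
# bilinear term plus a linear term to score every (dependent, head) arc.
import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    def __init__(self, enc_dim=400, arc_dim=256):
        super().__init__()
        self.head_mlp = nn.Linear(enc_dim, arc_dim)   # projection for heads
        self.dep_mlp = nn.Linear(enc_dim, arc_dim)    # projection for dependents
        self.U = nn.Parameter(torch.randn(arc_dim, arc_dim) * 0.01)  # bilinear term
        self.w = nn.Linear(arc_dim, 1, bias=True)                    # head bias term

    def forward(self, enc):                 # enc: (seq_len, enc_dim)
        head = torch.relu(self.head_mlp(enc))
        dep = torch.relu(self.dep_mlp(enc))
        # score[i, j] = dep_i^T U head_j + w^T head_j + b
        return dep @ self.U @ head.t() + self.w(head).t()

enc = torch.randn(10, 400)                  # e.g., BERT-BiLSTM encoder states
scores = BiaffineArcScorer()(enc)           # (10, 10) arc score matrix
print(scores.shape)
```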
no code implementations • WS 2019 • Hyun Kim, Joon-Ho Lim, Hyun-Ki Kim, Seung-Hoon Na
Our proposed model re-purposes BERT for translation quality estimation and uses multi-task learning for the sentence-level task and the word-level subtasks (i.e., source word, target word, and target gap).
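A minimal sketch of the multi-task head layout implied here, assuming a shared BERT-style encoder; head names, dimensions, and the OK/BAD tag sets are illustrative, not the paper's exact configuration:

```python
# Sketch: one sentence-level regression head and per-token binary heads for
# the word-level quality estimation subtasks, on top of shared encoder states.
import torch
import torch.nn as nn

class MultiTaskQEHeads(nn.Module):
    def __init__(self, hidden=768):
        super().__init__()
        self.sent_score = nn.Linear(hidden, 1)     # sentence-level quality score
        self.src_tag = nn.Linear(hidden, 2)        # source-word OK/BAD
        self.mt_tag = nn.Linear(hidden, 2)         # target-word OK/BAD
        self.gap_tag = nn.Linear(hidden, 2)        # target-gap OK/BAD

    def forward(self, token_states, pooled):       # outputs of the shared encoder
        return {
            "sentence": self.sent_score(pooled).squeeze(-1),
            "source_words": self.src_tag(token_states),
            "target_words": self.mt_tag(token_states),
            "target_gaps": self.gap_tag(token_states),
        }

token_states = torch.randn(2, 32, 768)   # (batch, tokens, hidden) from BERT
pooled = token_states[:, 0]              # [CLS]-style pooled representation
outputs = MultiTaskQEHeads()(token_states, pooled)
print({k: tuple(v.shape) for k, v in outputs.items()})
```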
no code implementations • IJCNLP 2017 • Kangil Kim, Jong-Hun Shin, Seung-Hoon Na, SangKeun Jung
Neural machine translation decoders are usually conditional language models that sequentially generate the words of the target sentence.
no code implementations • 8 Feb 2015 • Seung-Hoon Na, In-Su Kang, Jong-Hyeok Lee
Although these document characteristics should be handled differently, all previous term frequency normalization methods have ignored these differences and used a simplified length-driven approach that discounts term frequency only according to document length, causing unreasonable penalization.
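For context, a standard instance of such a length-driven normalization (shown for illustration; it is not the method proposed here) is pivoted length normalization, which discounts raw term frequency only by the ratio of document length $|d|$ to the average document length:

$$\mathrm{tf}'(t, d) = \frac{\mathrm{tf}(t, d)}{(1 - b) + b \cdot \frac{|d|}{\mathrm{avgdl}}}, \qquad 0 \le b \le 1$$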