1 code implementation • ACL 2022 • Mingda Chen, Zewei Chu, Sam Wiseman, Kevin Gimpel
Since characters are fundamental to TV series, we propose two entity-centric evaluation metrics.
1 code implementation • Findings (ACL) 2021 • Zewei Chu, Karl Stratos, Kevin Gimpel
This reliance on label descriptions causes dataless classifiers to be highly sensitive to how those descriptions are chosen, hindering the broader application of dataless classification in practice.
Ranked #3 on Zero-Shot Text Classification on AG News
1 code implementation • Findings (ACL) 2020 • Mingda Chen, Zewei Chu, Karl Stratos, Kevin Gimpel
Accurate lexical entailment (LE) and natural language inference (NLI) often require large quantities of costly annotations.
1 code implementation • AKBC 2021 • Zewei Chu, Karl Stratos, Kevin Gimpel
We describe NatCat, a large-scale resource for text classification constructed from three data sources: Wikipedia, Stack Exchange, and Reddit.
1 code implementation • 21 Nov 2019 • Zewei Chu, Mingda Chen, Jing Chen, Miaosen Wang, Kevin Gimpel, Manaal Faruqui, Xiance Si
We present a large-scale dataset for the task of rewriting an ill-formed natural language question to a well-formed one.
2 code implementations • IJCNLP 2019 • Mingda Chen, Zewei Chu, Kevin Gimpel
Prior work on pretrained sentence embeddings and benchmarks focuses on the capabilities of stand-alone sentences.
2 code implementations • IJCNLP 2019 • Mingda Chen, Zewei Chu, Yang Chen, Karl Stratos, Kevin Gimpel
Rich entity representations are useful for a wide class of entity-related problems.
no code implementations • NAACL 2019 • Jun Seok Kang, Robert L. Logan IV, Zewei Chu, Yang Chen, Dheeru Dua, Kevin Gimpel, Sameer Singh, Niranjan Balasubramanian
Given a sentence about a target entity, the task is to automatically generate a post-modifier phrase that provides contextually relevant information about the entity.
no code implementations • EACL 2017 • Zewei Chu, Hai Wang, Kevin Gimpel, David McAllester
Progress in text understanding has been driven by large datasets that test particular capabilities, like recent datasets for reading comprehension (Hermann et al., 2015).
Ranked #32 on Language Modelling on LAMBADA