no code implementations • EMNLP 2021 • Yiming Ju, Yuanzhe Zhang, Zhixing Tian, Kang Liu, Xiaohuan Cao, Wenting Zhao, Jinlong Li, Jun Zhao
Multiple-choice MRC is one of the most studied machine reading comprehension (MRC) tasks, owing to its convenient evaluation and flexible answer format.
no code implementations • ICML 2020 • Di Chen, Yiwei Bai, Wenting Zhao, Sebastian Ament, John Gregoire, Carla Gomes
We introduce Deep Reasoning Networks (DRNets), an end-to-end framework that combines deep learning with constraint reasoning for solving pattern de-mixing problems, typically in an unsupervised or very-weakly-supervised setting.
no code implementations • 2 May 2024 • Wenting Zhao, Xiang Ren, Jack Hessel, Claire Cardie, Yejin Choi, Yuntian Deng
In addition to timestamped chat transcripts, we enrich the dataset with demographic data, including state, country, and hashed IP addresses, alongside request headers.
no code implementations • 17 Dec 2023 • Wenting Zhao, Ye Liu, Yao Wan, Yibo Wang, Qingyang Wu, Zhongfen Deng, Jiangshu Du, Shuaiqi Liu, Yunlong Xu, Philip S. Yu
Task-Oriented Parsing (TOP) enables conversational assistants to interpret user commands expressed in natural language, transforming them into structured outputs that combine elements of both natural language and intent/slot tags.
2 code implementations • 22 Nov 2023 • John X. Morris, Wenting Zhao, Justin T. Chiu, Vitaly Shmatikov, Alexander M. Rush
We consider the problem of language model inversion and show that next-token probabilities contain a surprising amount of information about the preceding text.
no code implementations • 14 Nov 2023 • Wenting Zhao, Justin T Chiu, Jena D. Hwang, Faeze Brahman, Jack Hessel, Sanjiban Choudhury, Yejin Choi, Xiang Lorraine Li, Alane Suhr
To instead investigate the ability to model unusual, unexpected, and unlikely situations, we explore the task of uncommonsense abductive reasoning.
1 code implementation • 13 Nov 2023 • Huihan Li, Yuting Ning, Zeyi Liao, Siyuan Wang, Xiang Lorraine Li, Ximing Lu, Wenting Zhao, Faeze Brahman, Yejin Choi, Xiang Ren
We further use the data generated by LINK to construct a dataset Logic-Induced-Long-Tail (LINT) that can be used to evaluate downstream models on the long-tail distribution; LINT contains 108K knowledge statements spanning four domains.
1 code implementation • 7 Nov 2023 • Zhongfen Deng, Hao Peng, Tao Zhang, Shuaiqi Liu, Wenting Zhao, Yibo Wang, Philip S. Yu
Furthermore, the copy mechanism in the value generator and the value attention module in the value classifier help our model address the data discrepancy issue by focusing only on the relevant parts of the input text and ignoring information, such as sentence structure, that causes the discrepancy.
no code implementations • 7 Nov 2023 • Zhongfen Deng, Seunghyun Yoon, Trung Bui, Franck Dernoncourt, Quan Hung Tran, Shuaiqi Liu, Wenting Zhao, Tao Zhang, Yibo Wang, Philip S. Yu
Then we merge the sentences selected for a specific aspect as the input for the summarizer to produce the aspect-based summary.
no code implementations • 31 Oct 2023 • Wenting Zhao, Ye Liu, Tong Niu, Yao Wan, Philip S. Yu, Shafiq Joty, Yingbo Zhou, Semih Yavuz
Moreover, a significant gap in the current landscape is the absence of a realistic benchmark for evaluating the effectiveness of grounding LLMs on heterogeneous knowledge sources (e.g., knowledge bases and text).
1 code implementation • 26 Oct 2023 • Justin T. Chiu, Wenting Zhao, Derek Chen, Saujas Vaduguru, Alexander M. Rush, Daniel Fried
Large language models (LLMs) excel at processing and generating both text and code.
1 code implementation • 20 Sep 2023 • Yibo Wang, Wenting Zhao, Yao Wan, Zhongfen Deng, Philip S. Yu
In this paper, we propose to incorporate the label dependencies among entity types into a multi-task learning framework for better MRC-based NER.
no code implementations • 20 Sep 2023 • Wenting Zhao, Ye Liu, Yao Wan, Yibo Wang, Zhongfen Deng, Philip S. Yu
Furthermore, TAG-QA outperforms the end-to-end model T5 by 16% and 12% on BLEU-4 and PARENT F-score, respectively.
1 code implementation • ICCV 2023 • Qianxiong Xu, Wenting Zhao, Guosheng Lin, Cheng Long
Moreover, when calculating SCCA, we design a scaled-cosine mechanism to better utilize the support features for similarity calculation.
Ranked #8 on Few-Shot Semantic Segmentation on COCO-20i (5-shot)
no code implementations • 29 Jul 2023 • Yibo Wang, Yanbing Xue, Bo Liu, Musen Wen, Wenting Zhao, Stephen Guo, Philip S. Yu
Position bias, the phenomenon whereby users tend to focus on higher-ranked items of the search result list regardless of their actual relevance to the query, is prevalent in many ranking systems.
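A standard generic remedy for position bias (not necessarily the method of the entry above) is inverse propensity weighting: each click is reweighted by the inverse of the probability that its rank was examined at all. A minimal sketch, with illustrative propensity values:

```python
# Sketch: debiasing click feedback with inverse propensity weighting (IPW).
# The examination propensities below are illustrative, not from any real system.

def ipw_relevance(clicks, propensities):
    """Estimate relevance from clicks by reweighting each click
    with the inverse of its rank's examination propensity."""
    assert len(clicks) == len(propensities)
    return [c / p for c, p in zip(clicks, propensities)]

# A click at rank 3 (propensity 0.25) counts 4x as much as one at rank 1.
clicks = [1, 0, 1]
propensities = [1.0, 0.5, 0.25]
print(ipw_relevance(clicks, propensities))  # [1.0, 0.0, 4.0]
```

The intuition: rarely examined positions yield fewer clicks for equally relevant items, so surviving clicks there are stronger evidence of relevance.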
no code implementations • 18 Jun 2023 • Guangbu Liu, Tong Zhang, Xudong Wang, Wenting Zhao, Chuanwei Zhou, Zhen Cui
Instead of a plain use of a base graph dictionary, we propose the variational graph dictionary adaptation (VGDA) to generate a personalized dictionary (named adapted graph dictionary) for catering to each input graph.
no code implementations • 24 May 2023 • Wenting Zhao, Justin T. Chiu, Claire Cardie, Alexander M. Rush
Instead of using direct supervision, this work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context.
no code implementations • 23 May 2023 • Wenting Zhao, Justin T. Chiu, Claire Cardie, Alexander M. Rush
Explainable multi-hop question answering (QA) not only predicts answers but also identifies rationales, i.e., subsets of input sentences used to derive the answers.
no code implementations • 5 Jan 2023 • Wenting Zhao, Ibrahim Abdelaziz, Julian Dolby, Kavitha Srinivas, Mossad Helali, Essam Mansour
We demonstrate the efficiency and usefulness of Serenity's analysis in two applications: code completion and automated machine learning.
1 code implementation • NAACL 2022 • Wenting Zhao, Konstantine Arkoudas, Weiqi Sun, Claire Cardie
Task-oriented parsing (TOP) aims to convert natural language into machine-readable representations of specific tasks, such as setting an alarm.
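To make the TOP output format concrete, TOP-style parses interleave intent (`IN:`) and slot (`SL:`) labels with natural-language spans in a bracketed tree. A minimal sketch of reading slots out of such a parse (the specific labels here are illustrative, not taken from any particular dataset):

```python
import re

# Illustrative TOP-style bracketed parse for "set an alarm for 8 am".
parse = "[IN:CREATE_ALARM set an alarm for [SL:DATE_TIME 8 am ] ]"

def extract_slots(parse: str) -> list[tuple[str, str]]:
    """Return (slot_label, span_text) pairs from a flat bracketed parse."""
    return [(label, text.strip())
            for label, text in re.findall(r"\[SL:(\w+) ([^\[\]]+)\]", parse)]

print(extract_slots(parse))  # [('DATE_TIME', '8 am')]
```

Real TOP parses can nest intents inside slots, in which case a proper bracket-matching parser replaces this flat regex.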
1 code implementation • Findings (ACL) 2022 • Yang Wu, Yanyan Zhao, Hao Yang, Song Chen, Bing Qin, Xiaohuan Cao, Wenting Zhao
Through further analysis of the ASR outputs, we find that in some cases the sentiment words, the key sentiment elements in the textual modality, are misrecognized as other words, which changes the sentiment of the text and directly hurts the performance of multimodal sentiment models.
Automatic Speech Recognition (ASR) +4
1 code implementation • Findings (EMNLP) 2021 • Wenting Zhao, Ye Liu, Yao Wan, Philip S. Yu
Few-shot table-to-text generation is a task of composing fluent and faithful sentences to convey table content using limited data.
no code implementations • 21 Aug 2021 • Di Chen, Yiwei Bai, Sebastian Ament, Wenting Zhao, Dan Guevarra, Lan Zhou, Bart Selman, R. Bruce van Dover, John M. Gregoire, Carla P. Gomes
DRNets compensate for the limited data by exploiting and magnifying the rich prior knowledge about the thermodynamic rules governing the mixtures of crystals with constraint reasoning seamlessly integrated into neural network optimization.
no code implementations • EACL 2021 • Ye Liu, Yao Wan, JianGuo Zhang, Wenting Zhao, Philip Yu
In this paper, we claim that the syntactic and semantic structures of natural language are critical for non-autoregressive machine translation and can further improve its performance.
no code implementations • 9 Mar 2021 • Wenting Zhao, Shufeng Kong, Junwen Bai, Daniel Fink, Carla Gomes
This in turn leads to a challenging and long-standing problem in the field of computer science: how to perform accurate multi-label classification with hundreds of labels?
no code implementations • 16 Feb 2021 • Wenting Zhao, Carla Gomes
In the real world, it is more common to deal with noisy datasets than clean ones, given that modern datasets are often labeled by large groups of annotators on crowdsourcing platforms; yet little attention has been given to evaluating multi-label classifiers with noisy labels.
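Multi-label evaluation is typically done with set-overlap metrics such as example-based F1, which is exactly what noisy labels can distort: if the reference label set is corrupted, the score computed against it misleads. A minimal sketch of the metric itself:

```python
# Sketch: example-based F1, a common multi-label evaluation metric.
# With noisy labels, the reference set itself may be corrupted, so scores
# computed against it can misrepresent classifier quality.

def example_f1(true_labels: set, pred_labels: set) -> float:
    """F1 between the reference and predicted label sets for one example."""
    if not true_labels and not pred_labels:
        return 1.0
    overlap = len(true_labels & pred_labels)
    denom = len(true_labels) + len(pred_labels)
    return 2 * overlap / denom if denom else 0.0

print(example_f1({"cat", "dog"}, {"dog", "bird"}))  # 0.5
```

Flipping even one reference label (say, dropping "dog" from the true set) changes the score for a fixed prediction, which is why noise-aware evaluation matters.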
no code implementations • 5 Feb 2021 • Yiwei Bai, Wenting Zhao, Carla P. Gomes
There has been an increasing interest in harnessing deep learning to tackle combinatorial optimization (CO) problems in recent years.
no code implementations • 22 Jan 2021 • Ye Liu, Yao Wan, Jian-Guo Zhang, Wenting Zhao, Philip S. Yu
In this paper, we claim that the syntactic and semantic structures of natural language are critical for non-autoregressive machine translation and can further improve its performance.
no code implementations • 1 Jan 2021 • Wenting Zhao, Yuan Fang, Zhen Cui, Tong Zhang, Jian Yang, Wei Liu
In this paper, we propose a simple yet effective graph deformer network (GDN) to fulfill anisotropic convolution filtering on graphs, analogous to the standard convolution operation on images.
no code implementations • 20 Aug 2020 • Guangshuai Gao, Wenting Zhao, Qingjie Liu, Yunhong Wang
Co-saliency detection aims to detect common salient objects from a group of relevant images.
no code implementations • 28 Nov 2019 • Xueya Zhang, Tong Zhang, Wenting Zhao, Zhen Cui, Jian Yang
Graph convolutional networks (GCNs) have shown a powerful ability to represent text structure and effectively facilitate text classification.
no code implementations • 25 Sep 2019 • Di Chen, Yiwei Bai, Wenting Zhao, Sebastian Ament, John M. Gregoire, Carla P. Gomes
We introduce Deep Reasoning Networks (DRNets), an end-to-end framework that combines deep learning with reasoning for solving pattern de-mixing problems, typically in an unsupervised or weakly-supervised setting.
no code implementations • 3 Jun 2019 • Di Chen, Yiwei Bai, Wenting Zhao, Sebastian Ament, John M. Gregoire, Carla P. Gomes
At a high level, DRNets encode a structured latent space of the input data, which is constrained to adhere to prior knowledge by a reasoning module.
no code implementations • 7 Jul 2018 • Wenting Zhao, Chunyan Xu, Zhen Cui, Tong Zhang, Jiatao Jiang, Zhen-Yu Zhang, Jian Yang
In this paper, we aim to give a comprehensive analysis of when work matters by transforming different classical network structures to graph CNN, particularly in the basic graph recognition problem.
Ranked #3 on Graph Classification on IMDb-B