no code implementations • 17 Apr 2024 • Minghe Gao, Shuang Chen, Liang Pang, Yuan YAO, Jisheng Dang, Wenqiao Zhang, Juncheng Li, Siliang Tang, Yueting Zhuang, Tat-Seng Chua
Their ability to execute intricate compositional reasoning tasks is also constrained, ultimately stalling these models' learning progress.
1 code implementation • 22 Feb 2024 • Shuang Chen, Amir Atapour-Abarghouei, Hubert P. H. Shum
In this paper, we propose an end-to-end High-quality INpainting Transformer (HINT), which includes a novel mask-aware pixel-shuffle downsampling module (MPD) that preserves the visible information extracted from the corrupted image while keeping that information intact for the high-level inferences made within the model.
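As a rough illustration of the idea behind mask-aware pixel-shuffle downsampling, the PyTorch sketch below downsamples features losslessly with pixel-unshuffle and gates them by the downsampled mask. The module name, channel sizes, and mask-gating scheme are illustrative assumptions, not the paper's actual MPD implementation.

```python
import torch
import torch.nn as nn

class MaskAwarePixelShuffleDown(nn.Module):
    """Illustrative stand-in for an MPD-style block: downsample features
    losslessly via pixel-unshuffle, then gate by the downsampled mask so
    information from visible pixels dominates. Not the paper's design."""

    def __init__(self, channels: int, factor: int = 2):
        super().__init__()
        self.unshuffle = nn.PixelUnshuffle(factor)  # HxW -> H/f x W/f, C -> C*f^2 (lossless)
        self.proj = nn.Conv2d(channels * factor * factor, channels, kernel_size=1)

    def forward(self, feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W); mask: (B, 1, H, W) with 1 = visible, 0 = hole
        down_feat = self.unshuffle(feat)                 # (B, C*f^2, H/f, W/f)
        down_mask = self.unshuffle(mask)                 # (B, f^2, H/f, W/f)
        down_mask = down_mask.mean(dim=1, keepdim=True)  # visible fraction per cell
        return self.proj(down_feat) * down_mask          # suppress hole-dominated cells

x = torch.randn(1, 64, 32, 32)
m = torch.randint(0, 2, (1, 1, 32, 32)).float()
print(MaskAwarePixelShuffleDown(64)(x, m).shape)  # torch.Size([1, 64, 16, 16])
```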
no code implementations • 17 May 2023 • Shuang Chen, Amir Atapour-Abarghouei, Edmond S. L. Ho, Hubert P. H. Shum
We present software that predicts non-cleft facial images for patients with a cleft lip, thereby facilitating the understanding, awareness and discussion of cleft lip surgeries.
1 code implementation • 1 Aug 2022 • Shuang Chen, Amir Atapour-Abarghouei, Jane Kerby, Edmond S. L. Ho, David C. G. Sainsbury, Sophie Butterworth, Hubert P. H. Shum
A cleft lip is a congenital abnormality requiring surgical repair by a specialist.
no code implementations • 14 Apr 2022 • Carina Negreanu, Alperen Karaoglu, Jack Williams, Shuang Chen, Daniel Fabian, Andrew Gordon, Chin-Yew Lin
The task divides into two steps: subject suggestion, populating the main column, and gap filling, populating the remaining columns, as sketched below.
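A toy sketch of this two-step decomposition on a small table; `suggest_subjects` and `fill_gap` are hypothetical stand-ins for the actual models, with hard-coded placeholder logic.

```python
# Toy table: the main column is partially filled; other columns have gaps.
table = {
    "Country": ["France", "Germany", None],
    "Capital": ["Paris", None, None],
}

def suggest_subjects(main_column):
    # Step 1 (subject suggestion): propose values for the main column.
    return [v if v is not None else "Italy" for v in main_column]  # placeholder

def fill_gap(subject, column_name):
    # Step 2 (gap filling): predict a cell value given its row subject.
    known = {("Germany", "Capital"): "Berlin", ("Italy", "Capital"): "Rome"}
    return known.get((subject, column_name))

table["Country"] = suggest_subjects(table["Country"])
table["Capital"] = [v or fill_gap(s, "Capital")
                    for s, v in zip(table["Country"], table["Capital"])]
print(table)
```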
1 code implementation • ACL 2021 • Shuang Chen, Qian Liu, Zhiwei Yu, Chin-Yew Lin, Jian-Guang Lou, Feng Jiang
We present Retriever-Transducer-Checker (ReTraCk), a neural semantic parsing framework for large-scale knowledge base question answering (KBQA).
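A hypothetical sketch of a retrieve-transduce-check KBQA pipeline; the function names, interfaces, and toy logic below are assumptions for illustration and do not reflect ReTraCk's real components or API.

```python
def retrieve_schema(question: str) -> list[str]:
    # Retriever: fetch candidate entities and relations from the knowledge base.
    return ["m.paris", "location.capital_of"]  # toy candidates

def transduce(question: str, schema: list[str]) -> str:
    # Transducer: generate a logical form restricted to the retrieved schema.
    return "(capital_of m.paris)"

def check(logical_form: str) -> bool:
    # Checker: validate the logical form (e.g., well-formedness/executability).
    return logical_form.startswith("(") and logical_form.endswith(")")

question = "Which country is Paris the capital of?"
lf = transduce(question, retrieve_schema(question))
print(lf if check(lf) else None)
```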
Ranked #1 on Knowledge Base Question Answering on GrailQA
no code implementations • 6 Jan 2020 • Shuang Chen, Jinpeng Wang, Feng Jiang, Chin-Yew Lin
Existing state-of-the-art neural entity linking models employ an attention-based bag-of-words context model and pre-trained entity embeddings bootstrapped from word embeddings to assess topic-level context compatibility.
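A minimal sketch of attention-weighted bag-of-words context scoring for entity candidates, assuming PyTorch; the toy dimensions and the exact scoring form follow the common attention-based local-context setup rather than this paper's model.

```python
import torch

torch.manual_seed(0)
d, n_words, n_cands = 8, 5, 3           # toy dimensions (assumptions)
word_emb = torch.randn(n_words, d)      # context word embeddings
cand_emb = torch.randn(n_cands, d)      # candidate entity embeddings

# Attention: score each context word against the candidates, keep the max
# over candidates, and normalize into attention weights over words.
scores = cand_emb @ word_emb.T                          # (n_cands, n_words)
attn = torch.softmax(scores.max(dim=0).values, dim=0)   # (n_words,)

# Topic-level compatibility: attention-weighted bag of words vs. candidates.
context_vec = attn @ word_emb           # (d,) weighted context representation
compat = cand_emb @ context_vec         # (n_cands,) compatibility scores
print(compat.argmax().item())           # index of best-matching candidate
```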
Ranked #2 on Entity Disambiguation on AIDA-CoNLL (Micro-F1 metric)
no code implementations • IJCNLP 2019 • Shuang Chen, Jinpeng Wang, Xiaocheng Feng, Feng Jiang, Bing Qin, Chin-Yew Lin
Recent neural models for data-to-text generation rely on massive numbers of parallel data-text pairs to acquire writing knowledge.