1 code implementation • 10 Oct 2023 • Shuaichen Chang, Eric Fosler-Lussier
Large language models (LLMs) with in-context learning have demonstrated impressive generalization capabilities in the cross-domain text-to-SQL task, without the use of in-domain annotations.
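As a rough illustration of this setting (not the paper's exact prompt design), the sketch below builds a zero-shot prompt that serializes an unseen database schema next to the question, so no in-domain annotated examples are required; `serialize_schema`, `build_prompt`, and the commented-out `call_llm` are hypothetical names standing in for any chat-completion API.

```python
# Minimal sketch, assuming a generic chat-completion backend: the schema of a
# previously unseen database is rendered into the prompt so the LLM can produce
# SQL for it without in-domain examples.

def serialize_schema(tables: dict[str, list[str]]) -> str:
    """Render each table as a compact one-line description."""
    return "\n".join(
        f"Table {name}({', '.join(cols)})" for name, cols in tables.items()
    )

def build_prompt(tables: dict[str, list[str]], question: str) -> str:
    return (
        "Given the database schema below, write a SQLite query that answers the question.\n\n"
        f"{serialize_schema(tables)}\n\n"
        f"Question: {question}\n"
        "SQL:"
    )

if __name__ == "__main__":
    schema = {
        "singer": ["singer_id", "name", "country", "age"],
        "concert": ["concert_id", "singer_id", "year"],
    }
    prompt = build_prompt(schema, "How many singers are from France?")
    print(prompt)
    # sql = call_llm(prompt)  # hypothetical LLM call; might return
    # "SELECT COUNT(*) FROM singer WHERE country = 'France'"
```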
1 code implementation • 19 May 2023 • Shuaichen Chang, Eric Fosler-Lussier
Large language models (LLMs) with in-context learning have demonstrated remarkable capability in the text-to-SQL task.
2 code implementations • 21 Jan 2023 • Shuaichen Chang, Jun Wang, Mingwen Dong, Lin Pan, Henghui Zhu, Alexander Hanbo Li, Wuwei Lan, Sheng Zhang, Jiarong Jiang, Joseph Lilien, Steve Ash, William Yang Wang, Zhiguo Wang, Vittorio Castelli, Patrick Ng, Bing Xiang
Neural text-to-SQL models have achieved remarkable performance in translating natural language questions into SQL queries.
1 code implementation • 15 Nov 2022 • Shuaichen Chang, David Palzer, Jialin Li, Eric Fosler-Lussier, Ningchuan Xiao
Our experimental results show that V-MODEQA has better overall performance and robustness on MapQA than the state-of-the-art ChartQA and VQA algorithms by capturing the unique properties of map question answering.
no code implementations • 15 Sep 2021 • Naihao Deng, Shuaichen Chang, Peng Shi, Tao Yu, Rui Zhang
Existing text-to-SQL research only considers complete questions as the input, but lay users might struggle to formulate a complete question.
1 code implementation • 23 Oct 2020 • Yusen Zhang, Xiangyu Dong, Shuaichen Chang, Tao Yu, Peng Shi, Rui Zhang
Neural models have achieved significant results on the text-to-SQL task, but most current work assumes all input questions are legal and generates a SQL query for any input.
1 code implementation • 29 Aug 2019 • Shuaichen Chang, Pengfei Liu, Yun Tang, Jing Huang, Xiaodong He, Bo-Wen Zhou
Recent years have seen great success in the use of neural seq2seq models on the text-to-SQL task.
no code implementations • 21 Nov 2018 • Pengfei Liu, Shuaichen Chang, Xuanjing Huang, Jian Tang, Jackie Chi Kit Cheung
Recently, a large number of neural mechanisms and models have been proposed for sequence learning, of which self-attention, as exemplified by the Transformer model, and graph neural networks (GNNs) have attracted much attention.
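As a rough point of reference (not the paper's model), the numpy sketch below contrasts single-head scaled dot-product self-attention, which aggregates over all token pairs as if message passing on a fully connected graph, with a simple GNN layer whose aggregation is restricted to a given adjacency matrix; all weight names and shapes are illustrative.

```python
# Minimal numpy sketch, assuming toy dimensions and random weights.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Every token attends to every other token (a dense 'graph')."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

def gnn_layer(X, A, W):
    """Mean aggregation only over neighbours given by adjacency matrix A."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    return np.tanh((A_hat / deg) @ X @ W)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 5, 8                                   # 5 tokens, 8-dim features
    X = rng.normal(size=(n, d))
    Wq, Wk, Wv, W = (rng.normal(size=(d, d)) for _ in range(4))
    A = (rng.random((n, n)) < 0.4).astype(float)  # sparse neighbourhood
    print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 8)
    print(gnn_layer(X, A, W).shape)               # (5, 8)
```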