no code implementations • 29 Feb 2024 • Xiaoqiang Wang, Bang Liu, Lingfei Wu
In this paper, we present FAC$^2$E, a framework for Fine-grAined and Cognition-grounded LLMs' Capability Evaluation.
no code implementations • 8 May 2023 • Xiaoqiang Wang, Bang Liu, Siliang Tang, Lingfei Wu
We present SkillQG: a question generation framework with controllable comprehension types for assessing and improving machine reading comprehension models.
no code implementations • 22 Feb 2023 • Xiaoqiang Wang, Yanqing Liu, Jinyu Li, Sheng Zhao
To address the above limitations, in this paper we propose an improved non-autoregressive (NAR) spelling correction model for contextual biasing in E2E neural transducer-based ASR systems, improving the previous CSC model from two perspectives. First, we incorporate acoustic information, via an external attention mechanism, together with text hypotheses into CSC to better distinguish target phrases from dissimilar or irrelevant phrases.
Automatic Speech Recognition (ASR) +3
no code implementations • 28 Jun 2022 • Dacheng Yin, Chuanxin Tang, Yanqing Liu, Xiaoqiang Wang, Zhiyuan Zhao, Yucheng Zhao, Zhiwei Xiong, Sheng Zhao, Chong Luo
In the proposed paradigm, global and local factors in speech are explicitly decomposed and separately manipulated to achieve high speaker similarity and continuous prosody.
no code implementations • 29 Apr 2022 • Xiaoqiang Wang, Bang Liu, Siliang Tang, Lingfei Wu
Existing metrics for assessing question generation not only require costly human references but also fail to take the input context of generation into account, and thus cannot capture the relevance between a generated question and its input context.
no code implementations • ACL 2022 • Xiaoqiang Wang, Bang Liu, Fangli Xu, Bo Long, Siliang Tang, Lingfei Wu
In this paper, we argue that a deep understanding of model capabilities and data properties can help us feed a model with appropriate training data based on its learning status.
1 code implementation • 2 Mar 2022 • Xiaoqiang Wang, Yanqing Liu, Jinyu Li, Veljko Miljanic, Sheng Zhao, Hosam Khalil
In this work, we introduce a novel approach to contextual biasing that adds a contextual spelling correction model on top of the end-to-end ASR system.
Automatic Speech Recognition (ASR) +2
1 code implementation • 1 Jan 2022 • Xiaoqiang Wang, Lei Zhu, Siliang Tang, Huazhu Fu, Ping Li, Fei Wu, Yi Yang, Yueting Zhuang
The depth estimation branch is trained with RGB-D images and then used to estimate the pseudo depth maps for all unlabeled RGB images to form the paired data.
no code implementations • 17 Aug 2021 • Xiaoqiang Wang, Yanqing Liu, Sheng Zhao, Jinyu Li
We incorporate the context information into the spelling correction model with a shared context encoder and use a filtering algorithm to handle large-size context lists.
Automatic Speech Recognition (ASR) +2
1 code implementation • 14 May 2021 • Xiaoqiang Wang, Yali Du, Shengyu Zhu, Liangjun Ke, Zhitang Chen, Jianye Hao, Jun Wang
Discovering causal relations among a set of variables is a long-standing question in many empirical sciences.
no code implementations • 3 Oct 2019 • Xiaoqiang Wang, Lejia Gu, Heung-wing Joseph Lee, Guofeng Zhang
In this paper, we present a quantum singular value decomposition algorithm for third-order tensors, inspired by the classical tensor singular value decomposition (t-SVD) algorithm, and then extend it to order-$p$ tensors.
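For reference, the classical t-SVD that this entry builds on can be sketched in a few lines of NumPy. This is a generic illustration of the classical algorithm (FFT along the third mode, a matrix SVD per frontal slice, inverse FFT), not the paper's quantum version; the function name and shapes are ours:

```python
import numpy as np

def t_svd(A):
    """Classical t-SVD of a real third-order tensor A of shape (n1, n2, n3).

    Returns factor tensors U, S, V such that A equals the t-product
    U * S * V^T (up to round-off).
    """
    n1, n2, n3 = A.shape
    k = min(n1, n2)
    Ahat = np.fft.fft(A, axis=2)
    Uh = np.zeros((n1, k, n3), dtype=complex)
    Sh = np.zeros((k, k, n3), dtype=complex)
    Vh = np.zeros((n2, k, n3), dtype=complex)
    # SVD the first half of the frontal slices; the rest follow from the
    # conjugate symmetry Ahat[:, :, n3-i] = conj(Ahat[:, :, i]) of a real A.
    for i in range(n3 // 2 + 1):
        u, s, vh = np.linalg.svd(Ahat[:, :, i], full_matrices=False)
        Uh[:, :, i], Sh[:, :, i], Vh[:, :, i] = u, np.diag(s), vh.conj().T
    for i in range(1, (n3 + 1) // 2):
        Uh[:, :, n3 - i] = Uh[:, :, i].conj()
        Sh[:, :, n3 - i] = Sh[:, :, i].conj()
        Vh[:, :, n3 - i] = Vh[:, :, i].conj()
    # The enforced symmetry makes the inverse FFT real up to round-off.
    to_real = lambda X: np.fft.ifft(X, axis=2).real
    return to_real(Uh), to_real(Sh), to_real(Vh)
```

In the Fourier domain each frontal slice satisfies `Uh @ Sh @ Vh.conj().T == Ahat`, which is what the quantum algorithm accelerates slice-wise.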
no code implementations • 10 Aug 2019 • Xiaoqiang Wang, Liangjun Ke, Zhimin Qiao, Xinghua Chai
In this paper, a new MARL algorithm, called Cooperative double Q-learning (Co-DQL), is proposed, which has several prominent features.
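As background, Co-DQL builds on double Q-learning, whose single-agent tabular update can be sketched as follows. This is van Hasselt's estimator, not the paper's cooperative multi-agent variant; all names and defaults here are illustrative:

```python
import random
from collections import defaultdict

def double_q_update(QA, QB, s, a, r, s2, actions, alpha=0.1, gamma=0.99):
    """One tabular double Q-learning step.

    Two Q-tables are kept: a fair coin picks which one to update; the
    updated table selects the greedy next action while the other table
    evaluates it, reducing the overestimation bias of plain Q-learning.
    """
    if random.random() < 0.5:
        a_star = max(actions, key=lambda x: QA[(s2, x)])
        QA[(s, a)] += alpha * (r + gamma * QB[(s2, a_star)] - QA[(s, a)])
    else:
        b_star = max(actions, key=lambda x: QB[(s2, x)])
        QB[(s, a)] += alpha * (r + gamma * QA[(s2, b_star)] - QB[(s, a)])

# Usage: QA = defaultdict(float); QB = defaultdict(float); then call
# double_q_update(QA, QB, s, a, r, s2, actions) on each transition.
```

Decoupling action selection from action evaluation in this way is the core trick that Co-DQL combines with its cooperative multi-agent machinery.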