no code implementations • COLING 2022 • Keli Xie, Dongchen He, Jiaxin Zhuang, Siyuan Lu, Zhongfeng Wang
To better capture dialogue information, we propose a 2D view of dialogue from a time-speaker perspective, from which the time and speaker streams of the dialogue can be extracted as strengthened input.
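The abstract does not give implementation details, but the idea of splitting a dialogue into a time stream and per-speaker streams can be sketched as follows. The function name and data layout here are illustrative assumptions, not the authors' code:

```python
# Hypothetical sketch of a time-speaker 2D view of a dialogue.
# A dialogue is a chronological list of (speaker, utterance) pairs.
from collections import defaultdict

def time_speaker_view(dialogue):
    """Return the time stream (all utterances in chronological order)
    and the speaker streams (each speaker's utterances, in order)."""
    time_stream = [utt for _, utt in dialogue]
    speaker_streams = defaultdict(list)
    for speaker, utt in dialogue:
        speaker_streams[speaker].append(utt)
    return time_stream, dict(speaker_streams)

dialogue = [("A", "Hi, how are you?"),
            ("B", "Fine, thanks."),
            ("A", "Great to hear.")]
time_stream, speaker_streams = time_speaker_view(dialogue)
```

Each stream could then be encoded separately and fed to the summarizer alongside the original dialogue.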
no code implementations • 16 Nov 2022 • Siyuan Lu, Chenchen Zhou, Keli Xie, Jun Lin, Zhongfeng Wang
Based on ELBERT, we develop an innovative method to accelerate text processing on GPU platforms, solving the difficult problem of making the early-exit mechanism work effectively with large input batch sizes.
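The core difficulty with batched early exiting is that different samples in a batch want to exit at different layers. A minimal sketch of one common resolution, shrinking the active batch as samples exit, is shown below; the function names, confidence criterion, and toy layers are assumptions for illustration, not ELBERT's actual mechanism:

```python
import numpy as np

def batched_early_exit(batch, layers, classifiers, threshold=0.9):
    """Run a batch through successive layers; after each layer, samples
    whose classifier confidence (max probability) exceeds `threshold`
    exit early and are removed from the active batch. Illustrative only."""
    n = batch.shape[0]
    preds = np.full(n, -1)          # final prediction per sample
    active = np.arange(n)           # indices of samples still in flight
    hidden = batch
    for layer, clf in zip(layers, classifiers):
        hidden = layer(hidden)
        probs = clf(hidden)         # (n_active, n_classes)
        conf = probs.max(axis=1)
        done = conf >= threshold
        preds[active[done]] = probs[done].argmax(axis=1)
        active = active[~done]
        hidden = hidden[~done]      # shrink the batch for later layers
        if active.size == 0:
            break
    if active.size:                 # force-exit leftovers at the last layer
        preds[active] = probs[~done].argmax(axis=1)
    return preds

# Toy demo: identity "layers" and a 2-class sigmoid "classifier".
def clf(h):
    p = 1.0 / (1.0 + np.exp(-h[:, 0]))
    return np.column_stack([p, 1.0 - p])

preds = batched_early_exit(
    np.array([[5.0], [-5.0], [0.1]]),
    layers=[lambda h: h, lambda h: h],
    classifiers=[clf, clf],
    threshold=0.9,
)
```

In the demo, the first two samples are confident after the first layer and exit immediately, while the low-confidence third sample runs through all layers and is force-exited at the end; shrinking the batch this way keeps GPU work proportional to the number of still-active samples.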
no code implementations • 1 Jul 2021 • Keli Xie, Siyuan Lu, Meiqi Wang, Zhongfeng Wang
Despite their great success in the Natural Language Processing (NLP) field, large pre-trained language models like BERT are not well suited to resource-constrained or real-time applications, owing to their large number of parameters and slow inference speed.