no code implementations • EMNLP 2021 • Xinmeng Li, Qian Li, Wansen Wu, Quanjun Yin
Recently, the focus of dialogue state tracking has expanded from single domain to multiple domains.
no code implementations • 4 Feb 2024 • Zhengqiu Zhu, Yong Zhao, Bin Chen, Sihang Qiu, Kai Xu, Quanjun Yin, Jincai Huang, Zhong Liu, Fei-Yue Wang
The transition from CPS-based Industry 4.0 to CPSS-based Industry 5.0 brings new requirements and opportunities to current sensing approaches, especially in light of recent progress in Chatbots and Large Language Models (LLMs).
no code implementations • 25 Jan 2024 • Mengyao Du, Miao Zhang, Yuwen Pu, Kai Xu, Shouling Ji, Quanjun Yin
To tackle the scarcity and privacy issues associated with domain-specific datasets, combining federated learning with fine-tuning has emerged as a practical solution.
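The idea above — clients fine-tune locally and share only model weights, never raw data — can be sketched with a FedAvg-style round. This is a minimal toy illustration, not the paper's method; `local_finetune` and `fedavg` are hypothetical names, and the "gradient step" is a stand-in.

```python
# Toy sketch of federated fine-tuning: each client adapts a shared model
# on private data, and only the weights are averaged on the server side.
# Function names and the update rule are illustrative, not from the paper.

def local_finetune(weights, local_targets, lr=0.1):
    """One local fine-tuning step: nudge each weight toward the client's
    private target (a stand-in for a real gradient update)."""
    return [w - lr * (w - t) for w, t in zip(weights, local_targets)]

def fedavg(client_weights):
    """Server step: average fine-tuned weights across clients, so raw
    domain-specific data never leaves the client."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Two clients fine-tune a shared 2-parameter model on private targets.
global_w = [0.0, 0.0]
private_data = [[1.0, 2.0], [3.0, 4.0]]
updated = [local_finetune(global_w, d) for d in private_data]
global_w = fedavg(updated)
```

Only the averaged weights cross the network, which is what makes the scheme attractive for scarce, privacy-sensitive domain data.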
no code implementations • 29 Nov 2023 • Ting Liu, Yue Hu, Wansen Wu, Youkai Wang, Kai Xu, Quanjun Yin
Then we introduce soft visual prompts in the input space of the visual encoder in a pretrained model.
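Soft visual prompting of this kind is commonly realized by prepending a few learnable vectors to the patch tokens entering a frozen encoder. The sketch below illustrates that mechanism only; the shapes, names, and toy "encoder" are assumptions, not the paper's architecture.

```python
# Minimal sketch of soft visual prompts in the input space of a visual
# encoder: learnable prompt vectors are prepended to the patch tokens,
# while the pretrained encoder itself stays frozen. All shapes and the
# toy mean-pooling "encoder" are illustrative only.

num_prompts, dim = 4, 8
patch_tokens = [[1.0] * dim for _ in range(16)]           # embeddings from the image
soft_prompts = [[0.0] * dim for _ in range(num_prompts)]  # the only trainable part

def frozen_encoder(tokens):
    """Stand-in for a pretrained encoder whose weights stay fixed:
    mean-pool the token embeddings into one feature vector."""
    n = len(tokens)
    return [sum(tok[j] for tok in tokens) / n for j in range(dim)]

# During prompt tuning, gradients would flow only into soft_prompts.
tokens = soft_prompts + patch_tokens
features = frozen_encoder(tokens)
```

The appeal is parameter efficiency: only `num_prompts * dim` values are trained while the encoder is reused as-is.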
no code implementations • 8 Oct 2023 • Qinglun Li, Miao Zhang, Nan Yin, Quanjun Yin, Li Shen
To further improve algorithm performance and alleviate overfitting to heterogeneous local data in Federated Learning (FL), our algorithm combines the Sharpness-Aware Minimization (SAM) optimizer with local momentum.
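The SAM-plus-momentum combination can be sketched on a one-dimensional toy loss: SAM first perturbs the weight toward the locally sharpest point, takes the gradient there, and that gradient feeds a momentum buffer. This is a generic illustration on f(w) = w², assuming nothing about the paper's exact coupling of the two components.

```python
# Toy sketch of Sharpness-Aware Minimization (SAM) with local momentum
# on f(w) = w**2. Hyperparameters and the momentum coupling are
# illustrative assumptions, not the paper's algorithm.

def grad(w):
    """Gradient of the toy loss f(w) = w**2."""
    return 2.0 * w

def sam_momentum_step(w, v, lr=0.1, rho=0.05, beta=0.9):
    g = grad(w)
    eps = rho * g / (abs(g) + 1e-12)  # ascend to the sharpest nearby point
    g_sam = grad(w + eps)             # gradient at the perturbed weights
    v = beta * v + g_sam              # local momentum buffer
    return w - lr * v, v

w, v = 1.0, 0.0
for _ in range(50):
    w, v = sam_momentum_step(w, v)
# w has been driven toward the flat minimum at 0
```

Minimizing the perturbed-point gradient biases training toward flat minima, which is the usual rationale for SAM's generalization benefit under heterogeneous clients.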
no code implementations • 7 Sep 2023 • Ting Liu, Yue Hu, Wansen Wu, Youkai Wang, Kai Xu, Quanjun Yin
In the indoor-aware stage, we apply an efficient tuning paradigm to learn deep visual prompts from an indoor dataset, in order to augment pretrained models with inductive biases towards indoor environments.
no code implementations • 16 Aug 2023 • Qinglun Li, Li Shen, Guanghao Li, Quanjun Yin, DaCheng Tao
To address the communication burden issues associated with federated learning (FL), decentralized federated learning (DFL) discards the central server and establishes a decentralized communication network, where each client communicates only with neighboring clients.
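One decentralized round of the kind described — no server, each client mixing only with its neighbors — can be sketched as gossip averaging on a ring. The uniform 1/3 mixing weights are a generic assumption, not the paper's scheme.

```python
# Sketch of one decentralized FL communication round on a ring: each
# client averages its model only with its two neighbors, with no central
# server. Uniform mixing weights are an illustrative choice.

def ring_gossip_round(models):
    n = len(models)
    return [
        (models[(i - 1) % n] + models[i] + models[(i + 1) % n]) / 3.0
        for i in range(n)
    ]

models = [0.0, 1.0, 2.0, 3.0]           # each client's (scalar) model
for _ in range(100):
    models = ring_gossip_round(models)  # repeated rounds reach consensus
```

Each round costs every client only two neighbor exchanges, yet repeated mixing drives all clients to the global average — the communication saving that motivates DFL.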
no code implementations • 21 Jun 2022 • Guanghao Li, Yue Hu, Miao Zhang, Ji Liu, Quanjun Yin, Yong Peng, Dejing Dou
Because training efficiency in the ring topology favors devices with homogeneous resources, classifying clients by computing capacity mitigates the impact of straggler effects.
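A simple way to realize such capacity-based classification is to sort clients by compute and chunk them into rings, so each ring advances at a homogeneous pace. The helper name, the FLOPS figures, and sort-then-chunk rule below are illustrative assumptions.

```python
# Illustrative grouping of clients into rings by computing capacity, so
# slow devices do not stall fast ones (mitigating stragglers). The
# capacity numbers and the sort-then-chunk rule are made up.

def group_by_capacity(clients, ring_size):
    """clients: list of (name, capacity) pairs -> list of rings whose
    members have similar compute, formed by sorting then chunking."""
    ordered = sorted(clients, key=lambda c: c[1])
    return [ordered[i:i + ring_size] for i in range(0, len(ordered), ring_size)]

clients = [("a", 10), ("b", 95), ("c", 12), ("d", 90), ("e", 11), ("f", 100)]
rings = group_by_capacity(clients, ring_size=3)
# slow devices {a, c, e} and fast devices {b, d, f} land in separate rings
```

Within each ring, per-round latency is then set by devices of similar speed rather than by the slowest client overall.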
no code implementations • 3 Aug 2021 • Xinmeng Li, Wansen Wu, Long Qin, Quanjun Yin
Evaluating the quality of a dialogue system is an understudied problem.
no code implementations • 27 Apr 2021 • Xinmeng Li, Mamoun Alazab, Qian Li, Keping Yu, Quanjun Yin
We evaluate QA2MN on PathQuestion and WorldCup2014, two representative datasets for complex multi-hop question answering.
no code implementations • 5 Nov 2018 • Junjie Zeng, Long Qin, Yue Hu, Cong Hu, Quanjun Yin
The first advantage of the proposed method is that SSG overcomes the sparse-reward and local-minimum-trap limitations faced by RL agents, so that LSPI can be used to generate paths in complex environments.
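Why subgoals help with sparse rewards can be shown on a toy grid path: a goal-only reward is zero everywhere until the end, whereas rewarding each reached subgoal gives the agent intermediate learning signal. The grid, subgoal set, and reward values below are made-up illustrations, not the SSG/LSPI formulation.

```python
# Toy illustration of how subgoals (as generated by an SSG-style module)
# densify a sparse reward. The environment, subgoals, and reward values
# are invented for illustration only.

def sparse_reward(pos, goal):
    """Goal-only reward: zero signal until the very last step."""
    return 1.0 if pos == goal else 0.0

def subgoal_reward(pos, subgoals, goal):
    """Shaped reward: partial credit for reaching each subgoal en route."""
    if pos == goal:
        return 1.0
    return 0.5 if pos in subgoals else 0.0

path = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
subgoals = {(2, 0), (2, 1)}
goal = (2, 2)

sparse = [sparse_reward(p, goal) for p in path]
shaped = [subgoal_reward(p, subgoals, goal) for p in path]
# the shaped signal is nonzero at intermediate states, guiding learning
```

The denser signal also gives the agent a gradient of progress away from local-minimum regions where the goal-only reward is uniformly zero.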