no code implementations • 16 Oct 2023 • Yingwei Ma, Yue Yu, Shanshan Li, Yu Jiang, Yong Guo, Yuanliang Zhang, Yutao Xie, Xiangke Liao
While traditional techniques that leverage such semantic information require complex static or dynamic code analysis to obtain features such as data flow and control flow, SeCoT demonstrates that this process can be fully automated via the intrinsic capabilities of LLMs (i.e., in-context learning), while remaining generalizable and applicable to challenging domains.
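A minimal sketch of what such prompt-driven semantic extraction could look like; `llm_complete` is a hypothetical stand-in for any LLM completion API, and the few-shot template is illustrative, not SeCoT's actual prompt:

```python
# Sketch: asking an LLM to extract data-flow facts via in-context learning,
# instead of running a static/dynamic analyzer over the code.

FEW_SHOT_EXAMPLE = """\
Code:
    x = read_input()
    y = x + 1
    print(y)
Data flow:
    x -> y (y is computed from x)
    y -> print (y is printed)
"""

def extract_data_flow(code: str, llm_complete) -> str:
    """Elicit data-flow edges from the model by example (in-context learning).
    `llm_complete` is any callable that maps a prompt string to a completion."""
    prompt = (
        "Extract the data-flow relations from the code, following the example.\n\n"
        f"{FEW_SHOT_EXAMPLE}\n"
        f"Code:\n{code}\nData flow:\n"
    )
    return llm_complete(prompt)
```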
no code implementations • 4 Jul 2023 • Rui Wang, Zhiming Zhou, Tao Zhang, Ling Wang, Xin Xu, Xiangke Liao, Kaiwen Li
The proposed model, which combines a graph neural network with a pointer mechanism, can effectively map the solver state to branching variable decisions.
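A minimal PyTorch sketch of this general pattern, assuming the solver state arrives as per-variable features plus an adjacency matrix; the single message-passing layer, the dimensions, and the dot-product pointer head are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class GNNPointerBrancher(nn.Module):
    """One message-passing layer over the variable graph, then a
    pointer-style attention head that scores each variable as the
    next branching decision. All sizes are illustrative."""

    def __init__(self, feat_dim: int = 16, hid_dim: int = 64):
        super().__init__()
        self.embed = nn.Linear(feat_dim, hid_dim)
        self.msg = nn.Linear(hid_dim, hid_dim)
        self.query = nn.Parameter(torch.randn(hid_dim))  # global pointer query

    def forward(self, var_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # var_feats: (num_vars, feat_dim); adj: (num_vars, num_vars)
        h = torch.relu(self.embed(var_feats))
        h = torch.relu(h + adj @ self.msg(h))   # aggregate neighbor messages
        scores = h @ self.query                 # pointer: score each variable
        return torch.softmax(scores, dim=0)     # distribution over variables

# Example: pick the branching variable for a toy 10-variable instance.
model = GNNPointerBrancher()
probs = model(torch.randn(10, 16), torch.eye(10))
branch_var = probs.argmax().item()
```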
1 code implementation • 28 Mar 2023 • Deze Wang, Boxing Chen, Shanshan Li, Wei Luo, Shaoliang Peng, Wei Dong, Xiangke Liao
To alleviate the potential catastrophic forgetting issue in multilingual models, we freeze all pre-trained model parameters, insert a parameter-efficient adapter structure, and fine-tune only the adapter.
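A minimal PyTorch sketch of this freeze-and-adapt pattern; the bottleneck adapter shape and its placement after the backbone are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual.
    The bottleneck size (16) is illustrative."""
    def __init__(self, hidden: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

def add_adapter(pretrained: nn.Module, hidden: int) -> nn.Module:
    # Freeze every pre-trained parameter ...
    for p in pretrained.parameters():
        p.requires_grad = False
    # ... so only the newly inserted adapter is trained.
    return nn.Sequential(pretrained, Adapter(hidden))

# Toy backbone standing in for the pre-trained model.
model = add_adapter(nn.Linear(768, 768), hidden=768)
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```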
no code implementations • 7 Dec 2022 • Jiangsu Du, Dongsheng Li, Yingpeng Wen, Jiazhi Jiang, Dan Huang, Xiangke Liao, Yutong Lu
In this paper, we propose a scalable evaluation methodology (SAIH) for analyzing the AI performance trend of HPC systems by scaling the problem sizes of customized AI applications.
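A minimal sketch of the scaling-measurement loop such a methodology implies; `run_workload` and the size grid are placeholders, not SAIH's actual harness:

```python
import time

def measure_scaling(run_workload, problem_sizes):
    """Run a customized AI workload at increasing problem sizes and record
    throughput, so the performance trend can be extrapolated beyond the
    measured range. `run_workload(size)` returns samples processed."""
    trend = []
    for size in problem_sizes:
        start = time.perf_counter()
        samples = run_workload(size)
        elapsed = time.perf_counter() - start
        trend.append((size, samples / elapsed))
    return trend

# e.g. measure_scaling(train_one_epoch, [2**k for k in range(10, 15)])
```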
1 code implementation • 4 Dec 2021 • Deze Wang, Zhouyang Jia, Shanshan Li, Yue Yu, Yun Xiong, Wei Dong, Xiangke Liao
In this paper, we propose an approach to bridge pre-trained models and code-related tasks.
no code implementations • 21 Jun 2021 • Enda Yu, Dezun Dong, Yemao Xu, Shuo Ouyang, Xiangke Liao
Communication overhead is the key challenge for distributed training.
1 code implementation • 24 Mar 2021 • Chen Zeng, Yue Yu, Shanshan Li, Xin Xia, Zhiming Wang, Mingyang Geng, Bailin Xiao, Wei Dong, Xiangke Liao
With the rapid increase in the number of public code repositories, developers have a strong desire to retrieve precise code snippets using natural language.
1 code implementation • 14 May 2020 • Yemao Xu, Dezun Dong, Weixia Xu, Xiangke Liao
To scale out and achieve faster training, two update algorithms are mainly applied in the distributed training process, i.e., the synchronous SGD algorithm (SSGD) and the asynchronous SGD algorithm (ASGD).
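A toy sketch contrasting the two update rules with simulated gradients; this illustrates the general algorithms, not the paper's implementation:

```python
import numpy as np

def ssgd_step(params, worker_grads, lr=0.1):
    """Synchronous SGD: wait for all workers, average their gradients,
    and apply one global update."""
    avg_grad = np.mean(worker_grads, axis=0)
    return params - lr * avg_grad

def asgd_step(params, grad, lr=0.1):
    """Asynchronous SGD: each worker applies its (possibly stale)
    gradient to the shared parameters as soon as it arrives."""
    return params - lr * grad

params = np.zeros(4)
grads = [np.random.randn(4) for _ in range(8)]  # gradients from 8 workers

sync_params = ssgd_step(params, grads)  # one averaged update per round
for g in grads:                         # eight independent, unsynchronized updates
    params = asgd_step(params, g)
```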