Search Results for author: Jie Zuo

Found 2 papers, 2 papers with code

MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts

1 code implementation • 22 Apr 2024 • Dengchun Li, Yingzi Ma, Naizheng Wang, Zhengmao Ye, Zhiyuan Cheng, Yinghao Tang, Yan Zhang, Lei Duan, Jie Zuo, Cal Yang, Mingjie Tang

We also propose a new high-throughput framework to alleviate the computation and memory bottlenecks during the training and inference of MoE models.

Multi-Task Learning • Quantization
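The listing only names the design (a LoRA-based mixture of experts); as a rough illustration of that idea, below is a minimal PyTorch sketch of a top-k routed mixture of LoRA experts applied on top of a frozen base linear layer. The class names (`LoRAExpert`, `LoRAMoELayer`), hyperparameters, and routing loop are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRAExpert(nn.Module):
    """One low-rank adapter: delta_W = B @ A, scaled by alpha / r (hypothetical sketch)."""
    def __init__(self, d_in, d_out, r=8, alpha=16):
        super().__init__()
        self.A = nn.Linear(d_in, r, bias=False)
        self.B = nn.Linear(r, d_out, bias=False)
        nn.init.zeros_(self.B.weight)  # zero init so the adapter starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.B(self.A(x)) * self.scale

class LoRAMoELayer(nn.Module):
    """Frozen base linear plus a top-k routed mixture of LoRA experts (hypothetical sketch)."""
    def __init__(self, base: nn.Linear, n_experts=8, top_k=2, r=8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # base weights stay frozen; only router/adapters train
        self.router = nn.Linear(base.in_features, n_experts, bias=False)
        self.experts = nn.ModuleList(
            LoRAExpert(base.in_features, base.out_features, r=r)
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):  # x: (tokens, d_in)
        out = self.base(x)
        gate = F.softmax(self.router(x), dim=-1)            # (tokens, n_experts)
        weights, idx = gate.topk(self.top_k, dim=-1)        # (tokens, top_k)
        weights = weights / weights.sum(-1, keepdim=True)   # renormalize over selected experts
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                    # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out
```

Because the base weights are frozen and each expert is low-rank, only the router and the small A/B matrices are trained, which keeps the trainable parameter count small even with many experts.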

ASPEN: High-Throughput LoRA Fine-Tuning of Large Language Models with a Single GPU

1 code implementation • 5 Dec 2023 • Zhengmao Ye, Dengchun Li, Jingqi Tian, Tingfeng Lan, Jie Zuo, Lei Duan, Hui Lu, Yexi Jiang, Jian Sha, Ke Zhang, Mingjie Tang

Transformer-based large language models (LLMs) have demonstrated outstanding performance across diverse domains, particularly when fine-tuned for specific domains.

Large Language Model • Scheduling
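ASPEN's premise, per the title, is high-throughput LoRA fine-tuning of many adapters on a single GPU. One common way to realize that is to share one frozen base matmul across concurrent fine-tuning jobs and apply per-job adapters to a fused token batch; the sketch below illustrates that general idea, not ASPEN's actual API. All names (`MultiLoRALinear`, `job_ids`) are hypothetical.

```python
import torch
import torch.nn as nn

class MultiLoRALinear(nn.Module):
    """One frozen base weight shared by several independent LoRA jobs (hypothetical sketch).

    Tokens from different fine-tuning jobs are fused into one batch; each token
    is matched to its own adapter, so the expensive base matmul runs only once.
    """
    def __init__(self, base: nn.Linear, n_jobs=4, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # base model is shared and frozen
        d_in, d_out = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(n_jobs, d_in, r) * 0.01)  # per-job down-projections
        self.B = nn.Parameter(torch.zeros(n_jobs, r, d_out))        # per-job up-projections, zero init
        self.scale = alpha / r

    def forward(self, x, job_ids):
        # x: (tokens, d_in); job_ids: (tokens,) long tensor mapping each token to its job
        out = self.base(x)           # single shared base matmul for all jobs
        A = self.A[job_ids]          # (tokens, d_in, r)
        B = self.B[job_ids]          # (tokens, r, d_out)
        delta = torch.bmm(torch.bmm(x.unsqueeze(1), A), B).squeeze(1)
        return out + delta * self.scale

# Usage: fuse tokens from two jobs into one forward pass.
layer = MultiLoRALinear(nn.Linear(512, 512), n_jobs=2)
x = torch.randn(6, 512)
job_ids = torch.tensor([0, 0, 0, 1, 1, 1])
y = layer(x, job_ids)  # (6, 512)
```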
