no code implementations • 25 Apr 2024 • Jiachen Liu, Zhiyu Wu, Jae-Won Chung, Fan Lai, Myungjin Lee, Mosharaf Chowdhury
The advent of large language models (LLMs) has transformed text-based services, enabling capabilities ranging from real-time translation to AI-driven chatbots.
no code implementations • 21 Apr 2024 • Yuxuan Zhu, Jiachen Liu, Mosharaf Chowdhury, Fan Lai
Federated learning (FL) aims to train machine learning (ML) models across potentially millions of edge client devices.
no code implementations • 10 Apr 2024 • Jae-Won Chung, Mosharaf Chowdhury
The enormous energy consumption of machine learning (ML) and generative AI workloads shows no sign of waning, taking a toll on operating costs, power delivery, and environmental sustainability.
no code implementations • 13 Dec 2023 • Jiachen Liu, Fan Lai, Ding Ding, Yiwen Zhang, Mosharaf Chowdhury
Scheduling edge resources among multiple FL jobs is different from GPU scheduling for cloud ML because of the ephemeral nature and planetary scale of participating devices as well as the overlapping resource requirements of diverse FL jobs.
2 code implementations • 12 Dec 2023 • Jae-Won Chung, Yile Gu, Insu Jang, Luoxi Meng, Nikhil Bansal, Mosharaf Chowdhury
Training large AI models on numerous GPUs consumes a massive amount of energy.
3 code implementations • 6 Dec 2023 • Zhongwei Wan, Xin Wang, Che Liu, Samiul Alam, Yu Zheng, Jiachen Liu, Zhongnan Qu, Shen Yan, Yi Zhu, Quanlu Zhang, Mosharaf Chowdhury, Mi Zhang
Large Language Models (LLMs) have demonstrated remarkable capabilities in important tasks such as natural language understanding, language generation, and complex reasoning and have the potential to make a substantial impact on our society.
1 code implementation • 15 Sep 2023 • Insu Jang, Zhenning Yang, Zhen Zhang, Xin Jin, Mosharaf Chowdhury
Oobleck enables resilient distributed training of large DNN models with guaranteed fault tolerance.
1 code implementation • 4 Mar 2023 • Zhenning Yang, Luoxi Meng, Jae-Won Chung, Mosharaf Chowdhury
Specifically, our solution observes real-time carbon intensity shifts during training and controls the energy consumption of GPUs, thereby reducing carbon footprint while maintaining training performance.
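The abstract describes the mechanism only at a high level. As a rough illustration of the general idea, not the authors' implementation, the sketch below caps GPU power when grid carbon intensity is high. The `get_carbon_intensity` stub and all threshold values are hypothetical; `nvidia-smi -pl` is the standard (administrator-only) way to set a GPU power limit.

```python
import subprocess
import time

HIGH_CARBON_G_PER_KWH = 400.0  # hypothetical threshold (gCO2eq/kWh)
LOW_POWER_W = 150              # reduced GPU power cap during dirty-grid periods
FULL_POWER_W = 300             # default GPU power cap

def get_carbon_intensity() -> float:
    """Hypothetical stub: return current grid carbon intensity (gCO2eq/kWh).
    A real deployment would query a grid-operator or carbon-data API."""
    return 350.0

def set_gpu_power_limit(watts: int, gpu_index: int = 0) -> None:
    # nvidia-smi's power-limit flag; requires administrative privileges.
    subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)], check=True)

def carbon_aware_loop(poll_seconds: int = 300) -> None:
    while True:
        intensity = get_carbon_intensity()
        target = LOW_POWER_W if intensity > HIGH_CARBON_G_PER_KWH else FULL_POWER_W
        set_gpu_power_limit(target)
        time.sleep(poll_seconds)
```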
no code implementations • 24 Feb 2023 • Ewen Wang, Ajay Kannan, Yuefeng Liang, Boyi Chen, Mosharaf Chowdhury
Cross-device federated learning (FL) has been well-studied from algorithmic, system scalability, and training speed perspectives.
no code implementations • 26 Dec 2022 • Pierre Tholoniat, Kelly Kostopoulou, Mosharaf Chowdhury, Asaf Cidon, Roxana Geambasu, Mathias Lécuyer, Junfeng Yang
This DP budget can be regarded as a new type of compute resource in workloads where multiple ML models train on user data.
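To make the analogy concrete, here is a minimal, hypothetical sketch of treating a global DP budget as an allocatable resource. It is illustrative only: real systems account for privacy composition rather than plain subtraction.

```python
class DPBudgetPool:
    """Toy accountant treating a global epsilon budget like a schedulable
    resource. Illustrative only: real systems use composition accounting,
    not plain subtraction."""

    def __init__(self, total_epsilon: float):
        self.remaining = total_epsilon

    def try_allocate(self, epsilon: float) -> bool:
        """Grant a training job part of the budget, or make it wait."""
        if epsilon <= self.remaining:
            self.remaining -= epsilon
            return True
        return False

pool = DPBudgetPool(total_epsilon=10.0)
print(pool.try_allocate(3.0), pool.try_allocate(8.0))  # True False
```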
no code implementations • 29 Oct 2022 • Jiachen Liu, Fan Lai, Yinwei Dai, Aditya Akella, Harsha Madhyastha, Mosharaf Chowdhury
In this paper, we explore an additional layer of complexity to mitigate such heterogeneity by grouping clients with statistically similar data distributions (cohorts).
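As a minimal illustration of cohort formation (not the paper's algorithm), one could cluster clients by their normalized label histograms:

```python
import numpy as np
from sklearn.cluster import KMeans

def form_cohorts(label_histograms: np.ndarray, num_cohorts: int) -> np.ndarray:
    """Assign each client a cohort id by clustering its normalized label
    histogram (rows of shape (num_clients, num_classes) summing to 1)."""
    km = KMeans(n_clusters=num_cohorts, n_init=10, random_state=0)
    return km.fit_predict(label_histograms)

# Toy example: six clients over three classes, two clear distribution groups.
hists = np.array([
    [0.80, 0.10, 0.10], [0.70, 0.20, 0.10], [0.75, 0.15, 0.10],  # skewed to class 0
    [0.10, 0.10, 0.80], [0.20, 0.10, 0.70], [0.10, 0.15, 0.75],  # skewed to class 2
])
print(form_cohorts(hists, num_cohorts=2))  # e.g., [0 0 0 1 1 1]
```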
1 code implementation • 12 Aug 2022 • Jie You, Jae-Won Chung, Mosharaf Chowdhury
In this paper, we observe that common practices to improve training performance can often lead to inefficient energy usage.
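A back-of-the-envelope example of this observation, with made-up numbers: removing a GPU power cap may speed up a training step only sub-linearly, so energy per step rises.

```python
# Made-up numbers: removing a 150 W power cap doubles draw but only
# shaves 20% off the step time, so energy per step goes up.
power_w = {"capped": 150, "uncapped": 300}
step_s = {"capped": 1.00, "uncapped": 0.80}
for mode in power_w:
    print(mode, power_w[mode] * step_s[mode], "J per step")
# capped 150.0 J per step
# uncapped 240.0 J per step
```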
no code implementations • 9 Jun 2022 • Sanjay Sri Vallabh Singapuram, Fan Lai, Chuheng Hu, Mosharaf Chowdhury
The need to train DNN models on end-user devices (e.g., smartphones) is increasing with the need to improve data privacy and reduce communication overheads.
no code implementations • 17 Jan 2022 • Yiding Wang, Decang Sun, Kai Chen, Fan Lai, Mosharaf Chowdhury
To explore this, we first introduce the notion of training plasticity to quantify the training progress of internal DNN layers.
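The paper's exact plasticity metric is not reproduced here; a crude proxy for per-layer training progress is the relative weight change between checkpoints, sketched below with PyTorch.

```python
import torch

@torch.no_grad()
def layer_change_rates(prev_state: dict, curr_state: dict) -> dict:
    """Crude per-layer progress proxy (not the paper's metric): relative L2
    change of each floating-point parameter between two checkpoints. Layers
    whose weights have nearly stopped moving are candidates for freezing."""
    rates = {}
    for name, prev in prev_state.items():
        if not prev.is_floating_point():
            continue  # skip integer buffers such as BatchNorm counters
        curr = curr_state[name]
        rates[name] = ((curr - prev).norm() / (prev.norm() + 1e-12)).item()
    return rates

# Usage: snapshot the model twice during training with
#   s = {k: v.clone() for k, v in model.state_dict().items()}
# then, e.g.:
#   frozen = [n for n, r in layer_change_rates(s0, s1).items() if r < 1e-3]
```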
no code implementations • 6 Jan 2022 • Thomas Anderson, Adam Belay, Mosharaf Chowdhury, Asaf Cidon, Irene Zhang
The end of Dennard scaling and the slowing of Moore's Law have put the energy use of datacenters on an unsustainable path.
no code implementations • 9 Nov 2021 • Raed Kontar, Naichen Shi, Xubo Yue, Seokhyun Chung, Eunshin Byon, Mosharaf Chowdhury, Judy Jin, Wissam Kontar, Neda Masoud, Maher Noueihed, Chinedum E. Okwudire, Garvesh Raskutti, Romesh Saigal, Karandeep Singh, Zhisheng Ye
The Internet of Things (IoT) is on the verge of a major paradigm shift.
1 code implementation • 21 Jul 2021 • Naichen Shi, Fan Lai, Raed Al Kontar, Mosharaf Chowdhury
In this paper, we propose Fed-ensemble: a simple approach that brings model ensembling to federated learning (FL).
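A minimal sketch of the ensembling step, averaging the predictions of K member models; how each member is trained across clients is omitted here.

```python
import torch
import torch.nn as nn

def ensemble_predict(models: list[nn.Module], x: torch.Tensor) -> torch.Tensor:
    """Average the class probabilities of K member models. In Fed-ensemble the
    members are trained federatedly; that training loop is omitted here."""
    with torch.no_grad():
        probs = torch.stack([m(x).softmax(dim=-1) for m in models])
    return probs.mean(dim=0)

# Usage with any classifiers of matching output shape:
# y_prob = ensemble_predict([model_1, model_2, model_3], batch)
```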
3 code implementations • 24 May 2021 • Fan Lai, Yinwei Dai, Sanjay S. Singapuram, Jiachen Liu, Xiangfeng Zhu, Harsha V. Madhyastha, Mosharaf Chowdhury
We present FedScale, a federated learning (FL) benchmarking suite with realistic datasets and a scalable runtime to enable reproducible FL research.
1 code implementation • 12 Oct 2020 • Fan Lai, Xiangfeng Zhu, Harsha V. Madhyastha, Mosharaf Chowdhury
In this paper, we propose Oort to improve the performance of federated training and testing with guided participant selection.
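A simplified sketch of utility-guided selection in this spirit (not Oort's exact algorithm): rank clients by the product of a statistical-utility proxy and a system-speed proxy, reserving a slice for random exploration. The `loss` and `speed` fields are hypothetical placeholders.

```python
import random

def select_participants(clients: list[dict], k: int, explore_frac: float = 0.1) -> list[dict]:
    """Exploit the clients with the highest utility-times-speed score, but
    keep a random exploration slice so unseen clients still get sampled."""
    exploit_k = int(k * (1 - explore_frac))
    ranked = sorted(clients, key=lambda c: c["loss"] * c["speed"], reverse=True)
    chosen = ranked[:exploit_k]
    rest = ranked[exploit_k:]
    chosen += random.sample(rest, min(k - exploit_k, len(rest)))
    return chosen
```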
1 code implementation • 12 Feb 2019 • Peifeng Yu, Mosharaf Chowdhury
We show that these primitives can then be used to implement flexible sharing policies such as fairness, prioritization, and packing for various use cases.
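As a toy example of layering one such policy on top of sharing primitives, a preemptive prioritization rule could pick the highest-priority pending job at every switch opportunity; fairness or packing would swap in a different pick rule. This is an illustrative sketch, not the system's implementation.

```python
import heapq

class PriorityPolicy:
    """Toy policy layer: at every job-switch opportunity, run the pending job
    with the highest priority."""

    def __init__(self):
        self._queue = []   # (negated priority, arrival order, job)
        self._order = 0

    def submit(self, job, priority: int) -> None:
        heapq.heappush(self._queue, (-priority, self._order, job))
        self._order += 1

    def next_job(self):
        # Called whenever the GPU can switch jobs; None if nothing is pending.
        return heapq.heappop(self._queue)[2] if self._queue else None
```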
no code implementations • 16 May 2016 • Anand Padmanabha Iyer, Ion Stoica, Mosharaf Chowdhury, Li Erran Li
Our choice of this domain is influenced by its commonalities with several other domains that produce real-time data, by our access to a large live dataset, and by the data's real-time nature and high dimensionality, which make it a natural fit for a popular analysis technique: machine learning (ML).