no code implementations • ICML 2020 • Runxue Bao, Bin Gu, Heng Huang
Ordered Weighted $L_{1}$-norms (OWL) are a new family of regularizers for high-dimensional sparse regression.
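The OWL norm itself is simple to state: sort the absolute values of the coefficients in non-increasing order and take a weighted sum against a non-increasing, non-negative weight vector; with all weights equal it reduces to a scaled $L_1$ norm. A minimal sketch (the function name `owl_norm` is illustrative, not from the paper):

```python
import numpy as np

def owl_norm(w, lam):
    """Ordered Weighted L1 (OWL) norm: pair the non-increasing weight
    vector `lam` with the entries of |w| sorted in non-increasing
    order, and sum the products."""
    return np.sort(np.abs(w))[::-1] @ lam
```

With `lam` constant this recovers the ordinary $L_1$ penalty, which is one reason OWL is viewed as a generalization of classical sparse regularizers.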
1 code implementation • 10 May 2024 • Nan Zhang, Yanchi Liu, Xujiang Zhao, Wei Cheng, Runxue Bao, Rui Zhang, Prasenjit Mitra, Haifeng Chen
Moreover, by efficiently approximating weight importance with the refined training loss on a domain-specific calibration dataset, we obtain a pruned model emphasizing generality and specificity.
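A common way to approximate weight importance from a training loss on a calibration set is a first-order Taylor score, $|w \cdot \partial L/\partial w|$; whether this matches the paper's refined criterion is an assumption, and the helper below is only an illustrative sketch:

```python
import numpy as np

def taylor_prune_mask(w, grad, sparsity):
    """Generic first-order (Taylor) pruning sketch, not the paper's
    exact criterion: score each weight by |w * dL/dw| estimated on a
    calibration set, then prune the lowest-scoring fraction.
    Returns a boolean mask where True means "keep"."""
    importance = np.abs(w * grad)
    k = int(len(w) * sparsity)              # number of weights to remove
    cutoff = np.partition(importance, k)[k]  # k-th smallest score
    return importance >= cutoff
```

The calibration set enters only through `grad`, so the same mask routine can emphasize generality or specificity depending on which data the gradients are computed on.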
1 code implementation • 21 Mar 2024 • Xidong Wu, Shangqian Gao, Zeyu Zhang, Zhenzhen Li, Runxue Bao, Yanfu Zhang, Xiaoqian Wang, Heng Huang
Current techniques for deep neural network (DNN) pruning often involve intricate multi-step processes that require domain-specific expertise, making their widespread adoption challenging.
no code implementations • 18 Feb 2024 • Fali Wang, Runxue Bao, Suhang Wang, Wenchao Yu, Yanchi Liu, Wei Cheng, Haifeng Chen
Though Large Language Models (LLMs) have shown remarkable open-generation capabilities across diverse domains, they struggle with knowledge-intensive tasks.
no code implementations • 3 Feb 2024 • Yiming Sun, Yuhe Gao, Runxue Bao, Gregory F. Cooper, Jessi Espino, Harry Hochheiser, Marian G. Michaels, John M. Aronis, Chenxi Song, Ye Ye
Transfer learning has become a pivotal technique in machine learning and has proven to be effective in various real-world applications.
1 code implementation • 12 Oct 2023 • Runxue Bao, Yiming Sun, Yuhe Gao, Jindong Wang, Qiang Yang, Haifeng Chen, Zhi-Hong Mao, Ye Ye
These methods typically presuppose identical feature spaces and label spaces in both domains, a setting known as homogeneous transfer learning, which is not always a practical assumption.
no code implementations • 29 Jun 2023 • Yuelyu Ji, Yuhe Gao, Runxue Bao, Qi Li, Disheng Liu, Yiming Sun, Ye Ye
Results showed that the Multi-DANN models outperformed the Single-DANN models and baseline models in predicting revisits of COVID-19 patients to the ER within 7 days after discharge.
no code implementations • 17 Aug 2022 • Jason Xiaotian Dou, Alvin Qingkai Pan, Runxue Bao, Haiyi Harry Mao, Lei Luo, Zhi-Hong Mao
Due to the growth of large datasets and increasing model complexity, we want to learn and adapt the sampling process while training a representation.
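One simple way to adapt a sampling process during training is to sample examples in proportion to their current loss; this is a generic illustration of the idea, not the paper's method, and the function name and `floor` parameter are assumptions:

```python
import numpy as np

def adaptive_sampling_probs(losses, alpha=1.0, floor=1e-3):
    """Generic adaptive sampling sketch: weight each training point
    by its current loss (raised to `alpha`), with a small floor so
    no point is starved, then normalize to a distribution."""
    p = np.maximum(np.asarray(losses, dtype=float), 0.0) ** alpha + floor
    return p / p.sum()
```

The resulting distribution can be fed to a sampler each epoch, so harder examples are revisited more often as the representation evolves.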
no code implementations • 11 Aug 2022 • Runxue Bao, Bin Gu, Heng Huang
To address this challenge, we propose a novel accelerated doubly stochastic gradient descent (ADSGD) method for sparsity-regularized loss minimization. By eliminating inactive coefficients during optimization, ADSGD reduces the number of block iterations, achieves faster explicit model identification, and improves overall algorithm efficiency.
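The paper's ADSGD and its safe elimination step are not reproduced here. As a generic illustration of the doubly stochastic pattern it builds on (sampling one example and one coordinate per update, with a proximal soft-threshold that drives coefficients inactive), consider this Lasso sketch; all names and hyperparameters are illustrative:

```python
import numpy as np

def doubly_stochastic_lasso(X, y, lam=0.1, lr=0.05, epochs=300, seed=0):
    """Plain doubly stochastic proximal sketch for the Lasso (not the
    paper's ADSGD, which additionally eliminates inactive
    coefficients safely): each update samples one example i and one
    coordinate j, steps on that single partial derivative, and
    soft-thresholds the coordinate."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs * n):
        i = rng.integers(n)   # random training example
        j = rng.integers(d)   # random coordinate (block of size 1)
        grad_j = (X[i] @ w - y[i]) * X[i, j]
        w[j] -= lr * grad_j
        # proximal step: soft-threshold the updated coordinate
        w[j] = np.sign(w[j]) * max(abs(w[j]) - lr * lam, 0.0)
    return w
```

Coordinates that the soft-threshold repeatedly maps to zero are exactly the "inactive coefficients" a method like ADSGD can stop updating.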
no code implementations • 23 Apr 2022 • Runxue Bao, Xidong Wu, Wenhan Xian, Heng Huang
To the best of our knowledge, this is the first work on distributed safe dynamic screening methods.
1 code implementation • 29 Jun 2020 • Runxue Bao, Bin Gu, Heng Huang
Moreover, we prove that algorithms equipped with our screening rule are guaranteed to produce results identical to those of the original algorithms.
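Safe screening rules of this kind certify, from a duality gap, that certain coefficients must be zero at the optimum, so discarding them cannot change the solution. The paper's rule targets OSCAR/OWL regression; the sketch below instead shows the analogous, well-known Gap Safe test for the plain Lasso, purely for illustration:

```python
import numpy as np

def gap_safe_screen(X, y, w, lam):
    """Gap Safe screening sketch for the Lasso (not the paper's
    OWL-specific rule): returns a boolean mask of features that can
    be safely discarded at regularization level `lam`."""
    residual = y - X @ w
    # Dual-feasible point obtained by rescaling the residual.
    scale = max(lam, np.max(np.abs(X.T @ residual)))
    theta = residual / scale
    # Duality gap between primal and dual objectives.
    primal = 0.5 * residual @ residual + lam * np.abs(w).sum()
    dual = 0.5 * (y @ y) - 0.5 * lam**2 * np.sum((theta - y / lam) ** 2)
    radius = np.sqrt(2.0 * max(primal - dual, 0.0)) / lam
    col_norms = np.linalg.norm(X, axis=0)
    # Feature j is provably inactive if |x_j^T theta| + r * ||x_j|| < 1.
    return np.abs(X.T @ theta) + radius * col_norms < 1.0
```

Because the test only ever discards provably inactive features, running any solver on the reduced problem yields the same optimum as running it on the full one, which is the kind of identical-result guarantee the paper proves for its rule.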