no code implementations • 14 Apr 2024 • Xin-Chun Li, Shaoming Song, Yinchuan Li, Bingshuai Li, Yunfeng Shao, Yang Yang, De-Chuan Zhan
For better model personalization, we point out that the hard-won personalized models are not well exploited, and propose the "inherited private model" to store personalization experience.
no code implementations • 23 Feb 2024 • Nader Asadi, Mahdi Beitollahi, Yasser Khalil, Yinchuan Li, Guojun Zhang, Xi Chen
Parameter-efficient fine-tuning stands as the standard for efficiently fine-tuning large language and vision models on downstream tasks.
no code implementations • 7 Feb 2024 • Yinchuan Li, Yuancheng Zhan, Le Zheng, Xiaodong Wang
Different from traditional compressed sensing (CS) methods that only use the sparsity of user activities, we develop several Approximate Message Passing (AMP) based CS algorithms by exploiting the sparsity of user activities and mmWave channels.
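A generic AMP iteration with a soft-thresholding denoiser gives the flavor of such CS algorithms. This is a textbook sketch for real-valued sparse recovery, not the paper's mmWave-specific variant; the problem sizes and the threshold factor `alpha` are illustrative choices.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding denoiser for sparse signals."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp(A, y, n_iter=30, alpha=1.5):
    """Generic AMP iteration for y = A x + noise with sparse x.

    The Onsager correction term is what distinguishes AMP from
    plain iterative soft-thresholding.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        # Pseudo-data: current estimate plus matched-filter residual
        r = x + A.T @ z
        # Threshold scaled to the residual energy (heuristic choice)
        tau = alpha * np.linalg.norm(z) / np.sqrt(m)
        x_new = soft_threshold(r, tau)
        # Onsager term: (number of nonzeros / m) times previous residual
        onsager = z * (np.count_nonzero(x_new) / m)
        z = y - A @ x_new + onsager
        x = x_new
    return x

rng = np.random.default_rng(0)
m, n, k = 120, 300, 10           # measurements, dimension, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true
x_hat = amp(A, y)
```

The paper's contribution is to exploit sparsity in both the user-activity and the mmWave-channel domains; the sketch above only shows the single-domain building block.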
no code implementations • 30 Dec 2023 • Peihua Mai, Ran Yan, Rui Ye, Youjia Yang, Yinchuan Li, Yan Pang
In response, we present ConfusionPrompt, a novel private LLM inference framework designed to obfuscate the server by: (i) decomposing the prompt into sub-prompts, and (ii) generating pseudo prompts along with the genuine sub-prompts as input to the online LLM.
no code implementations • 23 Dec 2023 • Leo Maxime Brunswic, Yinchuan Li, Yushun Xu, Shangling Jui, Lizhuang Ma
GFlowNets is a novel flow-based method for learning a stochastic policy that generates objects via a sequence of actions, with probability proportional to a given positive reward.
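The defining property, sampling objects in proportion to their reward rather than only maximizing it, can be illustrated with a toy enumeration (the object names and reward values below are made up for illustration):

```python
import numpy as np

# Target of GFlowNet training: generate terminal object x with
# probability R(x) / Z, where Z is the sum of all rewards.
rewards = {"a": 3.0, "b": 1.0, "c": 6.0}   # hypothetical positive rewards
Z = sum(rewards.values())
target = {x: r / Z for x, r in rewards.items()}
print(target)   # → {'a': 0.3, 'b': 0.1, 'c': 0.6}

# A well-trained GFlowNet policy, sampled many times, should match
# these frequencies (here we sample from the target directly):
rng = np.random.default_rng(0)
samples = rng.choice(list(rewards), size=100_000, p=list(target.values()))
freqs = {x: np.mean(samples == x) for x in rewards}
```

Unlike reward-maximizing RL, which would concentrate on "c" alone, this target keeps visiting all high-reward modes, which is what makes GFlowNets attractive for exploratory tasks.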
no code implementations • 28 Jun 2023 • Didi Zhu, Zexi Li, Min Zhang, Junkun Yuan, Yunfeng Shao, Jiashuo Liu, Kun Kuang, Yinchuan Li, Chao Wu
It is found that the neural collapse (NC) optimality of text-to-image representations correlates positively with downstream generalizability, an effect that is more pronounced under class-imbalance settings.
no code implementations • 16 Jun 2023 • Xinyuan Ji, Xu Zhang, Wei Xi, Haozhi Wang, Olga Gadyatskaya, Yinchuan Li
Multi-task reinforcement learning and meta-reinforcement learning have been developed to quickly adapt to new tasks, but they tend to focus on tasks with higher rewards and more frequent occurrences, leading to poor performance on tasks with sparse rewards.
no code implementations • 11 May 2023 • Yinchuan Li, Shuang Luo, Yunfeng Shao, Jianye Hao
We propose the GFlowNets with Human Feedback (GFlowHF) framework to improve the exploration ability when training AI models.
no code implementations • 8 May 2023 • Didi Zhu, Yinchuan Li, Yunfeng Shao, Jianye Hao, Fei Wu, Kun Kuang, Jun Xiao, Chao Wu
We introduce a new problem in unsupervised domain adaptation, termed Generalized Universal Domain Adaptation (GUDA), which aims to achieve precise prediction of all target labels, including unknown categories.
no code implementations • 24 Apr 2023 • Yinchuan Li, Zhigang Li, Wenqian Li, Yunfeng Shao, Yan Zheng, Jianye Hao
Many score-based active learning methods have been successfully applied to graph-structured data, aiming to reduce the number of labels needed while improving the performance of graph neural networks, based on predefined score functions.
no code implementations • 12 Apr 2023 • Haozhi Wang, Yinchuan Li, Qing Wang, Yunfeng Shao, Jianye Hao
We then define an adjacency space for mismatched states and design a plug-and-play module for value iteration, which enables agents to infer more precise returns.
no code implementations • 8 Mar 2023 • Xu Zhang, Wenpeng Li, Yunfeng Shao, Yinchuan Li
To cope with non-i.i.d. client data, we propose a clustered Bayesian FL model named cFedbayes that learns different prior distributions for different clients.
1 code implementation • 4 Mar 2023 • Yinchuan Li, Shuang Luo, Haozhi Wang, Jianye Hao
Generative flow networks (GFlowNets), as an emerging technique, can be used as an alternative to reinforcement learning for exploratory control tasks.
1 code implementation • 4 Mar 2023 • Wenqian Li, Yinchuan Li, Zhigang Li, Jianye Hao, Yan Pang
Uncovering rationales behind predictions of graph neural networks (GNNs) has received increasing attention over the years.
2 code implementations • 15 Oct 2022 • Wenqian Li, Yinchuan Li, Shengyu Zhu, Yunfeng Shao, Jianye Hao, Yan Pang
Causal discovery aims to uncover causal structure among a set of variables.
no code implementations • 10 Oct 2022 • Xin-Chun Li, Wen-Shu Fan, Shaoming Song, Yinchuan Li, Bingshuai Li, Yunfeng Shao, De-Chuan Zhan
Complex teachers tend to be over-confident, and traditional temperature scaling limits the efficacy of class discriminability, resulting in less discriminative wrong-class probabilities.
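The limitation of plain temperature scaling can be seen numerically. The sketch below uses the standard temperature-scaled softmax from knowledge distillation with made-up teacher logits; it is not the paper's proposed fix, only the baseline behavior the abstract refers to.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax used in knowledge distillation."""
    e = np.exp((z - z.max()) / T)
    return e / e.sum()

# Over-confident teacher logits: the correct class (index 0) dominates.
logits = np.array([10.0, 3.0, 2.0, -1.0])

p1 = softmax(logits, T=1.0)  # near one-hot output
p4 = softmax(logits, T=4.0)  # softened output

# Ratio between the two largest wrong-class probabilities. Raising the
# temperature pushes this ratio toward 1, i.e. plain temperature
# scaling erodes the discriminability among wrong classes.
ratio_T1 = p1[1] / p1[2]
ratio_T4 = p4[1] / p4[2]
print(ratio_T1, ratio_T4)
```

At T=1 the ratio is exp(1) ≈ 2.72, while at T=4 it shrinks to exp(0.25) ≈ 1.28, so the relative ordering of wrong classes carries less signal for the student.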
no code implementations • 21 Sep 2022 • Haozhi Wang, Qing Wang, Yunfeng Shao, Dong Li, Jianye Hao, Yinchuan Li
Modern meta-reinforcement learning (Meta-RL) methods are mainly developed based on model-agnostic meta-learning, which performs policy gradient steps across tasks to maximize policy performance.
no code implementations • 27 Aug 2022 • Qing Wang, Jing Jin, Xiaofeng Liu, Huixuan Zong, Yunfeng Shao, Yinchuan Li
Federated learning (FL) is a new distributed machine learning framework that can achieve reliable collaborative training without collecting users' private data.
no code implementations • 20 Jun 2022 • Shuang Luo, Yinchuan Li, Jiahui Li, Kun Kuang, Furui Liu, Yunfeng Shao, Chao Wu
To this end, we propose a sparse state based MARL (S2RL) framework, which utilizes a sparse attention mechanism to discard irrelevant information in local observations.
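A minimal top-k masking sketch conveys the sparse-attention idea: attention weights for irrelevant entities in an agent's local observation are zeroed out. The exact S2RL mechanism may differ; the shapes, `k`, and all values below are illustrative.

```python
import numpy as np

def sparse_attention(q, K, V, k=2):
    """Scaled dot-product attention keeping only the top-k scores.

    Entries outside the top-k are masked to -inf before the softmax,
    so irrelevant observed entities receive exactly zero weight.
    """
    scores = K @ q / np.sqrt(q.shape[0])
    masked = np.full_like(scores, -np.inf)
    top = np.argsort(scores)[-k:]
    masked[top] = scores[top]
    w = np.exp(masked - masked[top].max())
    w = w / w.sum()
    return w @ V, w

rng = np.random.default_rng(0)
q = rng.normal(size=4)         # query from the acting agent
K = rng.normal(size=(6, 4))    # keys for 6 observed entities
V = rng.normal(size=(6, 4))    # values for the same entities
out, weights = sparse_attention(q, K, V, k=2)
print(np.count_nonzero(weights))   # → 2
```

Only two entities contribute to the output, which is the "discard irrelevant information" behavior described above, in contrast to dense softmax attention where every entity gets nonzero weight.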
1 code implementation • 17 Jun 2022 • Xin-Chun Li, Jin-Lin Tang, Shaoming Song, Bingshuai Li, Yinchuan Li, Yunfeng Shao, Le Gan, De-Chuan Zhan
Federated KWS (FedKWS) could serve as a solution without directly sharing users' data.
1 code implementation • 16 Jun 2022 • Xu Zhang, Yinchuan Li, Wenpeng Li, Kaiyang Guo, Yunfeng Shao
Federated learning faces major challenges from model overfitting caused by the lack of data and the statistical diversity among clients.
no code implementations • CVPR 2022 • Xin-Chun Li, Yi-Chu Xu, Shaoming Song, Bingshuai Li, Yinchuan Li, Yunfeng Shao, De-Chuan Zhan
The permutation invariance property of neural networks and the non-i.i.d.
no code implementations • 25 Mar 2022 • Xiaofeng Liu, Qing Wang, Yunfeng Shao, Yinchuan Li
To this end, we propose a personalized FL algorithm using a hierarchical proximal mapping based on the Moreau envelope, named sparse federated learning with hierarchical personalized models (sFedHP), which significantly improves global model performance in the face of diverse data.
no code implementations • 23 Mar 2022 • Zexi Li, Jiaxun Lu, Shuang Luo, Didi Zhu, Yunfeng Shao, Yinchuan Li, Zhimeng Zhang, Yongheng Wang, Chao Wu
In the literature, centralized clustered FL algorithms require the number of clusters to be assumed in advance and hence are not effective enough at exploring the latent relationships among clients.
no code implementations • 12 Jul 2021 • Xiaofeng Liu, Yinchuan Li, Qing Wang, Xu Zhang, Yunfeng Shao, Yanhui Geng
By incorporating an approximated L1-norm and the correlation between client models and the global model into the standard FL loss function, performance on statistically diverse data is improved, and the communication and computation loads required in the network are reduced compared with non-sparse FL.
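The regularized client objective described above can be sketched as follows. The coefficient names `lam`, `mu`, `eps` and the smoothed-L1 surrogate are illustrative assumptions, not the paper's exact notation or formulation.

```python
import numpy as np

def sparse_pfl_loss(w_client, w_global, task_loss, lam=0.01, mu=0.1, eps=1e-3):
    """Illustrative sparse personalized-FL client objective:

      task_loss                  standard local empirical loss
      lam * smoothed L1 norm     encourages sparse client models
      mu  * proximity to global  keeps clients correlated with the server model
    """
    l1_smooth = np.sum(np.sqrt(w_client**2 + eps**2))  # differentiable |w| surrogate
    prox = 0.5 * np.sum((w_client - w_global) ** 2)
    return task_loss + lam * l1_smooth + mu * prox

rng = np.random.default_rng(0)
w_c = rng.normal(size=5)   # hypothetical client parameters
w_g = np.zeros(5)          # hypothetical global parameters
loss = sparse_pfl_loss(w_c, w_g, task_loss=1.0)
```

The L1 term is what yields sparse (hence cheaper to communicate) client models, while the proximal term supplies the client-global correlation mentioned in the abstract.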
no code implementations • 12 Jul 2021 • Yinchuan Li, Xiaofeng Liu, Yunfeng Shao, Qing Wang, Yanhui Geng
Structured pruning is an effective compression technique to reduce the computation of neural networks, which is usually achieved by adding perturbations to reduce network parameters at the cost of slightly increasing training loss.
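For concreteness, here is the generic magnitude-based variant of structured pruning, removing whole output channels with the smallest L2 norms. This is a common baseline, not the paper's perturbation-based method; the shapes and `keep_ratio` are illustrative.

```python
import numpy as np

def prune_channels(weight, keep_ratio=0.5):
    """Structured pruning sketch: zero out entire output channels
    (rows of a conv/linear weight matrix) with the smallest L2 norms.

    Returns the pruned weight and a boolean mask of kept channels.
    """
    norms = np.linalg.norm(weight.reshape(weight.shape[0], -1), axis=1)
    n_keep = max(1, int(keep_ratio * weight.shape[0]))
    keep = np.argsort(norms)[-n_keep:]
    mask = np.zeros(weight.shape[0], dtype=bool)
    mask[keep] = True
    pruned = weight.copy()
    pruned[~mask] = 0.0
    return pruned, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))   # 8 output channels
W_pruned, mask = prune_channels(W, keep_ratio=0.5)
print(mask.sum())              # → 4
```

Because whole channels are removed rather than scattered weights, the pruned layer maps directly to a smaller dense layer, which is what makes structured pruning reduce actual computation.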
no code implementations • 26 Sep 2020 • Jeremy Johnston, Yinchuan Li, Marco Lops, Xiaodong Wang
Complex ADMM-Net, a complex-valued neural network architecture inspired by the alternating direction method of multipliers (ADMM), is designed for interference removal in super-resolution stepped-frequency radar angle-range-Doppler imaging.
4 code implementations • 20 Dec 2019 • Xinyi Li, Yinchuan Li, Hongyang Yang, Liuqing Yang, Xiao-Yang Liu
In this paper, we propose DP-LSTM, a novel deep neural network for stock price prediction, which incorporates news articles as hidden information and integrates different news sources through a differential privacy mechanism.
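The differential-privacy building block here is the standard Laplace mechanism: noise with scale sensitivity/epsilon is added to the values being protected. The sketch below applies it to hypothetical per-article sentiment scores; how DP-LSTM actually combines news features may differ.

```python
import numpy as np

def laplace_mechanism(values, sensitivity, epsilon, rng):
    """Standard Laplace mechanism for differential privacy:
    add Laplace(0, sensitivity / epsilon) noise to each value.
    Smaller epsilon means stronger privacy and noisier values.
    """
    scale = sensitivity / epsilon
    return values + rng.laplace(loc=0.0, scale=scale, size=len(values))

rng = np.random.default_rng(0)
sentiment = np.array([0.2, -0.5, 0.7])   # hypothetical news sentiment scores
private = laplace_mechanism(sentiment, sensitivity=1.0, epsilon=2.0, rng=rng)
```

Averaging many such noisy values across news sources keeps the aggregate informative while any single source's contribution stays obscured, which is the robustness argument for mixing DP noise into the input features.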
no code implementations • 3 Aug 2019 • Xinyi Li, Yinchuan Li, Xiao-Yang Liu, Christina Dan Wang
In this paper, we propose a novel deep neural network Mid-LSTM for midterm stock prediction, which incorporates the market trend as hidden states.
no code implementations • 21 Jun 2019 • Xinyi Li, Yinchuan Li, Yuancheng Zhan, Xiao-Yang Liu
Portfolio allocation is crucial for investment companies.