no code implementations • 30 Jan 2024 • Hao Zhang, Qingfeng Lin, Yang Li, Lei Cheng, Yik-Chung Wu
This problem is even more severe in cell-free networks as there are many of these parameters to be acquired.
1 code implementation • 14 Dec 2023 • Zhenrong Liu, Yang Li, Yi Gong, Yik-Chung Wu
This approach optimizes network parameters in the null space of the past tasks' feature representation matrix to guarantee stability.
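The null-space idea above can be sketched in a few lines: project each gradient step onto the null space of a past-task feature matrix, so updates leave outputs on past features unchanged. This is a minimal illustration with a hypothetical feature matrix and gradient, not the paper's actual implementation:

```python
import numpy as np

def null_space_projector(feature_matrix, tol=1e-10):
    """Orthogonal projector onto the null space of the rows of `feature_matrix`.

    Updates projected by this matrix do not change the network's responses
    on the past tasks' features, which is the stability idea above.
    """
    # Rows of V associated with (near-)zero singular values span the null space.
    _, s, vt = np.linalg.svd(feature_matrix, full_matrices=True)
    rank = int(np.sum(s > tol))
    null_basis = vt[rank:].T            # columns span the null space
    return null_basis @ null_basis.T    # orthogonal projector

# Toy example (hypothetical shapes): feature rows from past tasks.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
P = null_space_projector(A)

grad = np.array([0.3, -0.5, 0.7])       # raw gradient for a new task
projected = P @ grad                    # step restricted to the null space
```

Here `A @ projected` is zero, so the projected step cannot disturb past-task outputs; only the component along the unconstrained third coordinate survives.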
1 code implementation • 28 Jun 2023 • Wei-Bin Kou, Shuai Wang, Guangxu Zhu, Bin Luo, Yingxian Chen, Derrick Wing Kwan Ng, Yik-Chung Wu
While federated learning (FL) improves the generalization of end-to-end autonomous driving by model aggregation, the conventional single-hop FL (SFL) suffers from a slow convergence rate due to long-range communications between vehicles and the cloud server.
no code implementations • 19 Jun 2023 • Le Xu, Lei Cheng, Ngai Wong, Yik-Chung Wu, H. Vincent Poor
A probabilistic model is built to induce the common sparsity in the spatial domain, and a first-order Taylor expansion is adopted to eliminate the grid mismatch in the dictionaries.
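The grid-mismatch trick can be illustrated numerically: when the true parameter falls between grid points, linearizing the dictionary atom around the nearest grid point turns the unknown offset into a linear coefficient and shrinks the modeling error. The atom below is a hypothetical Fourier-type vector chosen only for illustration:

```python
import numpy as np

n = np.arange(16)                     # sample/sensor index (illustrative)

def atom(theta):
    """Hypothetical dictionary atom, e.g. a Fourier/steering vector."""
    return np.exp(1j * n * theta)

def datom(theta):
    """Derivative of the atom with respect to theta."""
    return 1j * n * np.exp(1j * n * theta)

theta_true = 0.33                     # true parameter, off the grid
theta_grid = 0.30                     # nearest grid point
delta = theta_true - theta_grid

# First-order Taylor expansion around the grid point:
#   a(theta_true) ~= a(theta_grid) + delta * a'(theta_grid),
# so the off-grid offset `delta` becomes a linear unknown.
approx = atom(theta_grid) + delta * datom(theta_grid)

err_grid = np.linalg.norm(atom(theta_true) - atom(theta_grid))
err_taylor = np.linalg.norm(atom(theta_true) - approx)
```

The linearized atom is much closer to the true one than the raw grid atom, which is why the offset can then be estimated jointly with the sparse coefficients.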
1 code implementation • 19 Jun 2023 • Le Xu, Lei Cheng, Ngai Wong, Yik-Chung Wu
Tensor train (TT) representation has achieved tremendous success in visual data completion tasks, especially when it is combined with tensor folding.
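The combination of tensor folding and TT representation mentioned above can be sketched with the standard TT-SVD procedure: fold (reshape) a low-order array into a higher-order tensor, then peel off TT cores by sequential truncated SVDs. A minimal sketch, not the paper's algorithm:

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose `tensor` into TT cores via sequential truncated SVDs."""
    shape = tensor.shape
    d = len(shape)
    cores, r = [], 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        rk = min(max_rank, len(s))
        cores.append(u[:, :rk].reshape(r, shape[k], rk))
        # Carry the remainder forward and refold for the next unfolding.
        mat = (s[:rk, None] * vt[:rk]).reshape(rk * shape[k + 1], -1)
        r = rk
    cores.append(mat.reshape(r, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into the full tensor."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.reshape([c.shape[1] for c in cores])

# Folding: reshape a 2-D array into a higher-order tensor before TT.
rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
folded = image.reshape(2, 4, 2, 4)

cores = tt_svd(folded, max_rank=16)          # no truncation here: exact
recon = tt_to_full(cores).reshape(8, 8)
```

With `max_rank` large enough the reconstruction is exact; in completion tasks a small `max_rank` is used instead, and the folding step is what exposes low TT ranks in visual data.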
no code implementations • 14 Dec 2022 • Yunqi Wang, Yang Li, Qingjiang Shi, Yik-Chung Wu
In order to achieve high data rate and ubiquitous connectivity in future wireless networks, a key task is to efficiently manage the radio resource by judicious beamforming and power allocation.
1 code implementation • 28 Nov 2022 • Yingxian Chen, Zhengzhe Liu, Baoheng Zhang, Wilton Fok, Xiaojuan Qi, Yik-Chung Wu
Weakly supervised detection of anomalies in surveillance videos is a challenging task.
Anomaly Detection in Surveillance Videos • Video Anomaly Detection
no code implementations • 23 Nov 2022 • Yunqi Wang, Yang Li, Qingjiang Shi, Yik-Chung Wu
However, current GNNs are only equipped with the node-update mechanism, which restricts them from modeling more complicated problems such as cooperative beamforming design, where the beamformers are on the graph edges of wireless networks.
no code implementations • 4 May 2022 • Yizhen Yang, Yi Gong, Yik-Chung Wu
Mobile edge computing (MEC) is envisioned as a promising technique to support computation-intensive and time-critical applications in the future Internet of Things (IoT) era.
no code implementations • 28 Apr 2022 • Zongze Li, Shuai Wang, Qingfeng Lin, Yang Li, Miaowen Wen, Yik-Chung Wu, H. Vincent Poor
Reconfigurable intelligent surfaces (RISs) have a revolutionary capability to customize the radio propagation environment for wireless networks.
no code implementations • 18 Mar 2022 • Yangge Chen, Lei Cheng, Yik-Chung Wu
Recently, there has been a revival of interest in low-rank matrix completion-based unsupervised learning through the lens of dual-graph regularization, which has significantly improved the performance of multidisciplinary machine learning tasks such as recommendation systems, genotype imputation, and image inpainting.
no code implementations • 31 Aug 2021 • Shuai Wang, Dachuan Li, Rui Wang, Qi Hao, Yik-Chung Wu, Derrick Wing Kwan Ng
Wireless federated learning (FL) is an emerging machine learning paradigm that trains a global parametric model from distributed datasets via wireless communications.
no code implementations • 2 Jun 2021 • Yunqi Wang, Furui Liu, Zhitang Chen, Qing Lian, Shoubo Hu, Jianye Hao, Yik-Chung Wu
Domain generalization aims to learn, from multiple source domains, knowledge that is invariant across different distributions yet semantically meaningful for downstream tasks, so as to improve the model's generalization ability on unseen target domains.
1 code implementation • 28 Jan 2021 • Shuai Wang, Yuncong Hong, Rui Wang, Qi Hao, Yik-Chung Wu, Derrick Wing Kwan Ng
Simulation results show that the proposed UMAirComp framework with the PAM algorithm achieves a smaller mean square error in model parameter estimation, as well as lower training loss and test error, compared with other benchmark schemes.
no code implementations • 13 Oct 2020 • Le Xu, Lei Cheng, Ngai Wong, Yik-Chung Wu
Tensor train (TT) decomposition, a powerful tool for analyzing multidimensional data, exhibits superior performance in many machine learning tasks.
no code implementations • 7 Sep 2020 • Dan Liu, Shuai Wang, Zhigang Wen, Lei Cheng, Miaowen Wen, Yik-Chung Wu
However, different devices may transmit different data for different machine learning jobs, and a fundamental question is how to jointly plan the UGV path, the devices' energy consumption, and the number of samples for each job.
no code implementations • 5 Sep 2020 • Lei Cheng, Zhongtao Chen, Qingjiang Shi, Yik-Chung Wu, Sergios Theodoridis
However, the optimal determination of a tensor rank is known to be a non-deterministic polynomial-time hard (NP-hard) task.
no code implementations • 21 Jul 2020 • Shuai Wang, Rui Wang, Qi Hao, Yik-Chung Wu, H. Vincent Poor
While machine-type communication (MTC) devices generate massive data, they often cannot process this data due to limited energy and computation power.
no code implementations • 12 Nov 2019 • Shuai Wang, Yik-Chung Wu, Minghua Xia, Rui Wang, H. Vincent Poor
However, power allocation in this paradigm requires maximizing the learning performance instead of the communication throughput, for which the celebrated water-filling and max-min fairness algorithms become inefficient.
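For context, the classical water-filling baseline mentioned above allocates power across parallel channels by equalizing a common "water level". A minimal sketch (bisection on the water level, with illustrative channel gains, showing the throughput-centric allocation the paper argues against for learning):

```python
import numpy as np

def water_filling(gains, total_power, iters=100):
    """Classical water-filling power allocation over parallel channels.

    Maximizes sum_i log(1 + g_i * p_i) subject to sum_i p_i = total_power;
    the optimum is p_i = max(0, mu - 1/g_i) for a common water level mu.
    """
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + 1.0 / gains.min()   # bracket for mu
    for _ in range(iters):                          # bisection on mu
        mu = 0.5 * (lo + hi)
        power = np.maximum(0.0, mu - 1.0 / gains)
        if power.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / gains)

# Illustrative gains: the weakest channel falls below the water level
# and receives no power at all.
p = water_filling([2.0, 1.0, 0.25], total_power=1.0)
```

Note how the allocation favors strong channels and may starve weak ones entirely; a learning-centric objective generally calls for a different allocation rule.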
no code implementations • 12 Jun 2017 • Jian Du, Shaodan Ma, Yik-Chung Wu, Soummya Kar, José M. F. Moura
Gaussian belief propagation (BP) has been widely used for distributed inference in large-scale networks such as the smart grid, sensor networks, and social networks, where local measurements/observations are scattered over a wide geographical area.
no code implementations • 13 Apr 2017 • Jian Du, Shaodan Ma, Yik-Chung Wu, Soummya Kar, José M. F. Moura
Gaussian belief propagation (BP) has been widely used for distributed estimation in large-scale networks such as the smart grid, communication networks, and social networks, where local measurements/observations are scattered over a wide geographical area.
no code implementations • 7 Nov 2016 • Jian Du, Shaodan Ma, Yik-Chung Wu, Soummya Kar, José M. F. Moura
A necessary and sufficient convergence condition for the belief mean vector to converge to the optimal centralized estimator is provided under the assumption that the message information matrix is initialized as a positive semidefinite matrix.
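The convergence behaviour described here can be illustrated on a small example: synchronous Gaussian BP on a 3-node chain (a tree, where the belief means are known to reach the centralized solution), with messages initialized to zero, a positive semidefinite initialization as in the condition above. The model values are illustrative:

```python
import numpy as np

# Information form of a Gaussian: p(x) is proportional to
# exp(-0.5 x^T J x + h^T x). A 3-node chain, so BP means are exact.
J = np.array([[2.0, 0.5, 0.0],
              [0.5, 2.0, 0.5],
              [0.0, 0.5, 2.0]])
h = np.array([1.0, 0.0, -1.0])

edges = [(0, 1), (1, 2)]
neighbors = {0: [1], 1: [0, 2], 2: [1]}

# Message precisions initialized to zero (a PSD initialization).
msg_J = {(i, j): 0.0 for i, j in edges + [(j, i) for i, j in edges]}
msg_h = {k: 0.0 for k in msg_J}

for _ in range(50):  # synchronous message updates
    new_J, new_h = {}, {}
    for (i, j) in msg_J:
        # Aggregate incoming messages at i, excluding the one from j.
        Jext = J[i, i] + sum(msg_J[(k, i)] for k in neighbors[i] if k != j)
        hext = h[i] + sum(msg_h[(k, i)] for k in neighbors[i] if k != j)
        new_J[(i, j)] = -J[i, j] ** 2 / Jext
        new_h[(i, j)] = -J[i, j] * hext / Jext
    msg_J, msg_h = new_J, new_h

# Belief means from the aggregated messages at each node.
mean = np.array([
    (h[i] + sum(msg_h[(k, i)] for k in neighbors[i])) /
    (J[i, i] + sum(msg_J[(k, i)] for k in neighbors[i]))
    for i in range(3)
])
centralized = np.linalg.solve(J, h)  # optimal centralized estimator
```

On this tree the belief means match `centralized` after a few iterations; on loopy graphs, convergence is exactly what the cited condition characterizes.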