no code implementations • 5 Mar 2024 • Zhongdong Liu, Keyuan Zhang, Bin Li, Yin Sun, Y. Thomas Hou, Bo Ji
To address this challenge, we develop a robust online algorithm to minimize the sum of transmission and staleness costs, ensuring a worst-case performance guarantee.
1 code implementation • 9 Feb 2024 • Keyuan Zhang, Zhongdong Liu, Nakjung Choi, Bo Ji
In this paper, we study the two-level ski-rental problem, where a user needs to fulfill a sequence of demands for multiple items by choosing one of the three payment options: paying for the on-demand usage (i.e., rent), buying individual items (i.e., single purchase), and buying all the items (i.e., combo purchase).
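As background, the classic single-item ski-rental problem, which the two-level variant generalizes, admits a simple break-even strategy: rent until the cumulative rent would reach the purchase price, then buy, which is 2-competitive against the offline optimum. A minimal sketch (the function name and costs are illustrative, not from the paper):

```python
def ski_rental(num_demands, rent_cost, buy_cost):
    """Break-even strategy for the classic ski-rental problem.

    num_demands: number of days the item is demanded (revealed online).
    Rent while cumulative rent stays below buy_cost, then buy.
    Returns the total cost paid by the online algorithm.
    """
    total, paid_rent, bought = 0, 0, False
    for _ in range(num_demands):
        if bought:
            break
        if paid_rent + rent_cost >= buy_cost:
            total += buy_cost  # break-even point reached: buy
            bought = True
        else:
            total += rent_cost
            paid_rent += rent_cost
    return total
```

With `rent_cost=1` and `buy_cost=4`, ten demands cost the online algorithm 7 while the offline optimum pays 4, within the 2-competitive bound.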
no code implementations • 16 Jun 2023 • Duo Cheng, Xingyu Zhou, Bo Ji
To design algorithms that can achieve the minimax regret, it is instructive to consider a more general setting where the learner has a budget of $B$ total observations.
no code implementations • 28 Jan 2023 • Fengjiao Li, Xingyu Zhou, Bo Ji
This problem is motivated by several real-world applications (such as dynamic pricing, cellular network configuration, and policy making), where users from a large population contribute to the reward of the action chosen by a central entity, but it is difficult to collect feedback from all users.
no code implementations • 17 Dec 2022 • Bo Ji, Tianyi Chen
Convolutional neural networks (CNNs) have achieved remarkable success but typically come with high computation costs and numerous redundant weight parameters.
no code implementations • 29 Aug 2022 • Daniel Minati, Ludwik Sams, Karen Li, Bo Ji, Krishna Vardhan
Respiratory measurements such as minute ventilation can be used, in correlation with other physiological measurements such as heart rate and heart rate variability, for remote health monitoring and for detecting symptoms of breathing-related disorders.
1 code implementation • 5 Aug 2022 • Yuehan Zhang, Bo Ji, Jia Hao, Angela Yao
In image super-resolution, both pixel-wise accuracy and perceptual fidelity are desirable.
no code implementations • 12 Jul 2022 • Fengjiao Li, Xingyu Zhou, Bo Ji
To tackle this problem, we consider differentially private distributed linear bandits, where only a subset of users from the population are selected (called clients) to participate in the learning process and the central server learns the global model from such partial feedback by iteratively aggregating these clients' local feedback in a differentially private fashion.
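A hedged sketch of the kind of privacy-preserving aggregation such a scheme relies on, using the standard Gaussian mechanism; the clipping and noise calibration below are a generic illustration, not the paper's exact protocol:

```python
import math
import random


def private_aggregate(local_feedback, clip_norm, sigma, rng=random):
    """Sum clients' feedback vectors under the Gaussian mechanism.

    Each vector is clipped to L2 norm <= clip_norm (bounding the
    sensitivity of the sum), then per-coordinate Gaussian noise with
    standard deviation sigma * clip_norm is added.
    """
    dim = len(local_feedback[0])
    total = [0.0] * dim
    for v in local_feedback:
        norm = math.sqrt(sum(x * x for x in v))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i, x in enumerate(v):
            total[i] += x * scale
    return [t + rng.gauss(0.0, sigma * clip_norm) for t in total]
```

Setting `sigma=0` recovers the exact clipped sum, which is a convenient sanity check before tuning the privacy noise level.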
no code implementations • 28 Apr 2022 • Shaohui Lin, Bo Ji, Rongrong Ji, Angela Yao
Multi-exit architectures consist of a backbone and branch classifiers that offer shortened inference pathways to reduce the run-time of deep neural networks.
1 code implementation • CVPR 2022 • Bo Ji, Angela Yao
Video deblurring has achieved remarkable progress thanks to the success of deep neural networks.
Ranked #3 on Analog Video Restoration on TAPE
no code implementations • 29 Mar 2022 • Xingyu Zhou, Bo Ji
Our ultimate goal is to study how to utilize the nature of soft constraints to attain a finer complexity-regret-constraint trade-off in the kernelized bandit setting.
no code implementations • 10 Jan 2022 • Yanlong Qiu, Jiaxi Zhang, Yanjiao Chen, Jin Zhang, Bo Ji
Specifically, we propose a novel \textit{Frequency Component Detection} method to detect the existence of mmWave signals, distinguish between mmWave radar and WiGig signals using a waveform classifier based on a convolutional neural network (CNN), and localize spy radars using triangulation based on the detector's observations at multiple anchor points.
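As a toy illustration of the first step (detecting whether a strong spectral component is present), the idea can be reduced to finding the dominant frequency bin of a signal; this naive DFT sketch is a stand-in, not the paper's mmWave pipeline:

```python
import cmath
import math


def dominant_frequency(samples, sample_rate):
    """Return the frequency (in Hz) of the largest-magnitude bin of a
    naive DFT, skipping the DC component. A strong peak indicates the
    presence of a dominant periodic component in the signal."""
    n = len(samples)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):  # positive frequencies only
        coeff = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        mag = abs(coeff)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * sample_rate / n
```

For a pure sinusoid sampled at an integer number of cycles, the detected peak lands exactly on the true frequency.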
no code implementations • 25 Jul 2021 • Fengjiao Li, Jia Liu, Bo Ji
Considering the achieved training accuracy of the global model as the utility of the selected workers, which is typically a monotone submodular function, we formulate the worker selection problem as a new multi-round monotone submodular maximization problem with cardinality and fairness constraints.
1 code implementation • NeurIPS 2021 • Tianyi Chen, Bo Ji, Tianyu Ding, Biyi Fang, Guanyi Wang, Zhihui Zhu, Luming Liang, Yixin Shi, Sheng Yi, Xiao Tu
Structured pruning is a commonly used technique in deploying deep neural networks (DNNs) onto resource-constrained devices.
no code implementations • 1 Jan 2021 • Tianyi Chen, Guanyi Wang, Tianyu Ding, Bo Ji, Sheng Yi, Zhihui Zhu
Optimizing with group sparsity is significant in enhancing model interpretability in machine learning applications, e.g., feature selection, compressed sensing, and model compression.
no code implementations • NeurIPS 2020 • Gamal Sallam, Zizhan Zheng, Jie Wu, Bo Ji
Compared to robust submodular maximization for set functions, new challenges arise when sequence functions are concerned.
no code implementations • 10 Nov 2020 • Tianyi Chen, Bo Ji, Yixin Shi, Tianyu Ding, Biyi Fang, Sheng Yi, Xiao Tu
The compression of deep neural networks (DNNs) to reduce inference cost becomes increasingly important to meet realistic deployment requirements of various applications.
1 code implementation • 7 Apr 2020 • Tianyi Chen, Tianyu Ding, Bo Ji, Guanyi Wang, Jing Tian, Yixin Shi, Sheng Yi, Xiao Tu, Zhihui Zhu
Sparsity-inducing regularization problems are ubiquitous in machine learning applications, ranging from feature selection to model compression.
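The workhorse subroutine in this class of problems is the proximal operator; for the l1 regularizer it reduces to soft-thresholding, applied coordinate-wise inside proximal-gradient methods. A minimal sketch (a generic illustration, not the paper's specific algorithm):

```python
def soft_threshold(x, lam):
    """Proximal operator of lam * |x|: shrink x toward zero by lam,
    setting it exactly to zero when |x| <= lam -- this hard zeroing
    is what induces sparsity in the solution."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0
```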
no code implementations • 17 Dec 2019 • Fengjiao Li, Yu Sang, Zhongdong Liu, Bin Li, Huasen Wu, Bo Ji
Interestingly, we find that under this new Pull model, replication schemes capture a novel tradeoff between different values of the AoI across the servers (due to the random updating processes) and different response times across the servers, which can be exploited to minimize the expected AoI at the user's side.
1 code implementation • 27 Jul 2019 • Bo Ji, Tianyi Chen
The main features of the new framework include: (i) a discriminator consisting of integrated CNN-Long Short-Term Memory (LSTM) based feature extraction, with Path Signature Features (PSF) as input, and a Feedforward Neural Network (FNN) based binary classifier; (ii) a recurrent latent variable model as the generator for synthesizing sequential handwritten data.
no code implementations • 15 Jan 2019 • Fengjiao Li, Jia Liu, Bo Ji
To tackle this new problem, we extend an online learning algorithm, UCB, to deal with a critical tradeoff between exploitation and exploration and employ the virtual queue technique to properly handle the fairness constraints.
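As a reference point, the standard UCB index that such extensions build on adds an exploration bonus to each arm's empirical mean. A minimal sketch of UCB1 arm selection (without the paper's virtual-queue fairness machinery):

```python
import math


def ucb1_select(counts, means, t):
    """Pick the arm maximizing empirical mean + sqrt(2 ln t / n_i),
    where n_i is the number of pulls of arm i so far. Any arm not yet
    pulled is tried first. The bonus shrinks as an arm is pulled more,
    balancing exploitation against exploration."""
    for i, n in enumerate(counts):
        if n == 0:
            return i
    scores = [m + math.sqrt(2 * math.log(t) / n)
              for m, n in zip(means, counts)]
    return max(range(len(scores)), key=scores.__getitem__)
```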
no code implementations • 13 Jan 2019 • Gamal Sallam, Bo Ji
With the advent of Network Function Virtualization (NFV), network services that traditionally run on proprietary dedicated hardware can now be realized using Virtual Network Functions (VNFs) that are hosted on general-purpose commodity hardware.
Networking and Internet Architecture • Data Structures and Algorithms
no code implementations • 17 Apr 2017 • Yu Sang, Bin Li, Bo Ji
Interestingly, we find that under this new Pull model, replication schemes capture a novel tradeoff between different levels of information freshness and different response times across the servers, which can be exploited to minimize the expected AoI at the user's side.
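The tradeoff can be illustrated with a small Monte Carlo experiment: pulling from more servers and waiting for all replies yields a fresher copy (the minimum age among replies) but a longer wait (the maximum response time). A hedged sketch with exponential ages and response times, which are illustrative distributions rather than the paper's model:

```python
import random


def simulate_aoi(num_servers, replication, trials, rng,
                 response_mean=0.5):
    """Estimate the expected AoI at the user when a pull request is
    replicated to `replication` randomly chosen servers, the user waits
    for all sampled replies, and keeps the freshest copy.
    AoI = (min age among replies) + (max response time among replies).
    """
    total = 0.0
    for _ in range(trials):
        servers = [(rng.expovariate(1.0),                   # copy age
                    rng.expovariate(1.0 / response_mean))   # response time
                   for _ in range(num_servers)]
        sampled = rng.sample(servers, replication)
        freshest_age = min(a for a, _ in sampled)
        wait = max(r for _, r in sampled)
        total += freshest_age + wait
    return total / trials
```

Sweeping `replication` from 1 up to `num_servers` typically shows the expected AoI first dropping and then rising again, matching the interior-optimum tradeoff described above.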
Networking and Internet Architecture