no code implementations • FL4NLP (ACL) 2022 • Huili Chen, Jie Ding, Eric Tramel, Shuang Wu, Anit Kumar Sahu, Salman Avestimehr, Tao Zhang
Inspired by Bayesian hierarchical models, we develop ActPerFL, a self-aware personalized FL method where each client can automatically balance the training of its local personal model and the global model that implicitly contributes to other clients’ training.
1 code implementation • 22 Apr 2024 • Enmao Diao, Qi Le, Suya Wu, Xinran Wang, Ali Anwar, Jie Ding, Vahid Tarokh
We introduce Collaborative Adaptation (ColA) with Gradient Learning (GL), a parameter-free, model-agnostic fine-tuning approach that decouples the computation of the gradient of hidden representations and parameters.
no code implementations • 23 Jan 2024 • Xun Xian, Ganghua Wang, Xuan Bi, Jayanth Srinivasa, Ashish Kundu, Mingyi Hong, Jie Ding
Subsequently, we employ a classifier, jointly trained with the watermark, to detect its presence.
no code implementations • 16 Oct 2023 • Ganghua Wang, Xun Xian, Jayanth Srinivasa, Ashish Kundu, Xuan Bi, Mingyi Hong, Jie Ding
The growing dependence on machine learning in real-world applications emphasizes the importance of understanding and ensuring its safety.
no code implementations • 12 Jun 2023 • Yu Zhang, Jia Li, Jie Ding, Xiang Li
The learning and analysis of network robustness, including controllability robustness and connectivity robustness, are critical for defending various networked systems against attacks.
no code implementations • 26 May 2023 • Xinran Wang, Qi Le, Ahmad Faraz Khan, Jie Ding, Ali Anwar
Collaborations among various entities, such as companies, research labs, AI agents, and edge devices, have become increasingly crucial for achieving machine learning tasks that cannot be accomplished by a single entity alone.
1 code implementation • 9 May 2023 • Enmao Diao, Eric W. Tramel, Jie Ding, Tao Zhang
Keyword Spotting (KWS) is a critical aspect of audio-based applications on mobile devices and virtual assistants.
no code implementations • 7 May 2023 • Gen Li, Ganghua Wang, Jie Ding
In this paper, the territory of LASSO is extended to two-layer ReLU neural networks, a popular and powerful class of nonlinear regression models.
no code implementations • 18 Apr 2023 • Erum Mushtaq, Yavuz Faruk Bakman, Jie Ding, Salman Avestimehr
It is a distributed learning framework naturally suitable for privacy-sensitive medical imaging datasets.
no code implementations • 15 Apr 2023 • Ahmad Faraz Khan, Xinran Wang, Qi Le, Azal Ahmad Khan, Haider Ali, Jie Ding, Ali Butt, Ali Anwar
Personalized FL has been widely used to address the heterogeneity challenges posed by non-IID data.
1 code implementation • ICLR 2023 • Enmao Diao, Ganghua Wang, Jiawei Zhan, Yuhong Yang, Jie Ding, Vahid Tarokh
Our extensive experiments corroborate the hypothesis that for a generic pruning procedure, PQI decreases first when a large model is being effectively regularized and then increases when its compressibility reaches a limit that appears to correspond to the beginning of underfitting.
no code implementations • 1 Feb 2023 • Suya Wu, Enmao Diao, Taposh Banerjee, Jie Ding, Vahid Tarokh
This paper develops a new variant of the classical Cumulative Sum (CUSUM) algorithm for quickest change detection.
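For reference, the classical one-sided CUSUM statistic that this work builds on can be sketched as follows. This is the generic textbook form for detecting an upward mean shift, not the paper's variant; the drift `k` and threshold `h` are illustrative choices.

```python
def cusum(stream, mu0=0.0, k=0.5, h=5.0):
    """Return the first index at which the CUSUM statistic exceeds h,
    or None if no change is declared."""
    g = 0.0
    for t, x in enumerate(stream):
        # Accumulate evidence of an upward shift, clipped at zero.
        g = max(0.0, g + (x - mu0) - k)
        if g > h:
            return t
    return None

# Pre-change samples near mean 0, then a shift to mean 2.
data = [0.1, -0.2, 0.0, 0.3, 2.1, 1.9, 2.2, 2.0, 1.8]
print(cusum(data))  # alarms shortly after the change at index 4
```

The statistic resets to zero whenever the running evidence turns negative, which is what makes CUSUM react quickly after a change while ignoring pre-change noise.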
no code implementations • 17 Dec 2022 • Qi Le, Enmao Diao, Xinran Wang, Ali Anwar, Vahid Tarokh, Jie Ding
Recommender Systems (RSs) have become increasingly important in many application domains, such as digital marketing.
no code implementations • 23 Jun 2022 • Xun Xian, Mingyi Hong, Jie Ding
The privacy of machine learning models has become a significant concern in many emerging Machine-Learning-as-a-Service applications, where prediction services based on well-trained models are offered to users via pay-per-query.
no code implementations • 11 Jun 2022 • Wenjing Yang, Ganghua Wang, Jie Ding, Yuhong Yang
One problem is understanding whether a network is more compressible than another of the same structure.
no code implementations • 17 Apr 2022 • Huili Chen, Jie Ding, Eric Tramel, Shuang Wu, Anit Kumar Sahu, Salman Avestimehr, Tao Zhang
In the context of personalized federated learning (FL), the critical challenge is to balance local model improvement and global model tuning when the personal and global objectives may not be exactly aligned.
no code implementations • 1 Feb 2022 • Jie Ding, Eric Tramel, Anit Kumar Sahu, Shuang Wu, Salman Avestimehr, Tao Zhang
Federated learning (FL) has been developed as a promising framework to leverage the resources of edge devices, enhance customers' privacy, comply with regulations, and reduce development costs.
no code implementations • 26 Jan 2022 • Mohammadreza Soltani, Suya Wu, Yuerong Li, Jie Ding, Vahid Tarokh
We propose a new structured pruning framework for compressing Deep Neural Networks (DNNs) with skip connections, based on measuring the statistical dependency of hidden layers and predicted outputs.
no code implementations • 27 Dec 2021 • Erum Mushtaq, Chaoyang He, Jie Ding, Salman Avestimehr
However, given that clients' data are invisible to the server and data distributions are non-identical across clients, a predefined architecture discovered in a centralized setting may not be an optimal solution for all the clients in FL.
no code implementations • 25 Nov 2021 • Xingzi Xu, Ali Hasan, Khalil Elkhalil, Jie Ding, Vahid Tarokh
While NODEs model the evolution of the latent variables as the solution to an ODE, C-NODE models their evolution as the solution of a family of first-order quasi-linear partial differential equations (PDEs) along curves on which the PDEs reduce to ODEs, referred to as characteristic curves.
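For intuition, the method of characteristics invoked here can be illustrated on the simplest first-order PDE, the linear transport equation (a standard textbook example, not the paper's general quasi-linear setting):

$$\partial_t u + c\,\partial_x u = 0, \qquad \frac{dx}{dt} = c \;\Rightarrow\; \frac{d}{dt}\,u\big(x(t), t\big) = \partial_t u + c\,\partial_x u = 0,$$

so along the characteristic curves $x(t) = x_0 + ct$ the PDE reduces to the trivial ODE $du/dt = 0$, and $u$ is constant along each such curve.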
1 code implementation • 26 Oct 2021 • Enmao Diao, Vahid Tarokh, Jie Ding
Recommender Systems (RSs) are operated locally by different organizations in many realistic scenarios.
no code implementations • 29 Sep 2021 • Chaoyang He, Erum Mushtaq, Jie Ding, Salman Avestimehr
Federated Learning (FL) is an effective learning framework used when data cannot be centralized due to privacy, communication costs, and regulatory restrictions. While there have been many algorithmic advances in FL, significantly less effort has been made on model development, and most works in FL employ predefined model architectures discovered in the centralized environment.
no code implementations • 29 Sep 2021 • Gen Li, Ganghua Wang, Yuantao Gu, Jie Ding
In this paper, the territory of LASSO is extended to the neural network model, a popular and powerful nonlinear regression model.
no code implementations • 20 Sep 2021 • Cheng Chen, Jiaying Zhou, Jie Ding, Yi Zhou
In this work, we develop an assisted learning framework for assisting organizations to improve their learning performance.
no code implementations • 14 Sep 2021 • Jiawei Zhang, Jie Ding, Yuhong Yang
A standard approach is to find the globally best modeling method from a set of candidate methods.
no code implementations • 22 Jun 2021 • Gen Li, Jie Ding
To the best of our knowledge, the rate of convergence of neural networks shown by existing works is bounded by at most the order of $n^{-1/4}$ for a sample size of $n$.
1 code implementation • 2 Jun 2021 • Enmao Diao, Jie Ding, Vahid Tarokh
However, the underlying organizations may have little interest in sharing their local data, models, and objective functions.
1 code implementation • 2 Jun 2021 • Enmao Diao, Jie Ding, Vahid Tarokh
Most existing results on Federated Learning (FL) assume the clients have ground-truth labels.
no code implementations • 18 Mar 2021 • Yuan Yang, Jie Ding
Building on this, we focus on a specific but important type of scale information: the resolution/sampling rate of time series data.
1 code implementation • 1 Jan 2021 • Mohammadreza Soltani, Suya Wu, Yuerong Li, Jie Ding, Vahid Tarokh
We measure a new model-free information between the feature maps and the output of the network.
no code implementations • 1 Jan 2021 • Gen Li, Yuantao Gu, Jie Ding
A crucial problem in neural networks is to select the most appropriate number of hidden neurons and obtain tight statistical risk bounds.
1 code implementation • 24 Dec 2020 • Jie Ding, Enmao Diao, Jiawei Zhou, Vahid Tarokh
We propose a generalized notion of Takeuchi's information criterion and prove that the proposed method can asymptotically achieve the optimal out-sample prediction loss under reasonable assumptions.
no code implementations • 21 Oct 2020 • Jiaying Zhou, Xun Xian, Na Li, Jie Ding
In this paper, we propose a method named ASCII for an agent to improve its classification performance through assistance from other agents.
3 code implementations • ICLR 2021 • Enmao Diao, Jie Ding, Vahid Tarokh
In this work, we propose a new federated learning framework named HeteroFL to address heterogeneous clients equipped with very different computation and communication capabilities.
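The core idea of serving clients with very different computation budgets can be sketched as carving width-scaled submodels out of the shared global parameters. The function and shapes below are a hypothetical illustration of this slicing idea, not HeteroFL's actual implementation.

```python
import numpy as np

def extract_submodel(global_weights, r):
    """Slice each global weight matrix down to a capacity ratio r in (0, 1].

    A client with ratio r trains only the top-left r-fraction of every
    layer; the server later aggregates the overlapping slices.
    """
    sub = {}
    for name, W in global_weights.items():
        rows = max(1, int(W.shape[0] * r))
        cols = max(1, int(W.shape[1] * r))
        sub[name] = W[:rows, :cols]
    return sub

# Two hypothetical hidden layers of the global model.
global_weights = {"layer1": np.zeros((8, 8)), "layer2": np.zeros((8, 4))}
small = extract_submodel(global_weights, 0.5)
print({k: v.shape for k, v in small.items()})
# → {'layer1': (4, 4), 'layer2': (4, 2)}
```

Because every client's slice is nested inside the same global tensors, weak and strong clients can still be aggregated into one model.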
no code implementations • 2 Oct 2020 • Gen Li, Yuantao Gu, Jie Ding
A crucial problem in neural networks is to select the most appropriate number of hidden neurons and obtain tight statistical risk bounds.
no code implementations • ICLR 2021 • Xinran Wang, Yu Xiang, Jun Gao, Jie Ding
In this work, we propose information laundering, a novel framework for enhancing model privacy.
2 code implementations • 27 Aug 2020 • Tianyang Xie, Jie Ding
An emerging number of modern applications involve forecasting time series data that exhibit both short-time dynamics and long-time seasonality.
no code implementations • 16 Jul 2020 • Mahyar Nemati, Morteza Soltani, Jie Ding, Jinho Choi
Analytical and numerical evaluations demonstrate the performance of the proposed method in terms of BER, data rate, and interference.
no code implementations • 12 Jul 2020 • Khalil Elkhalil, Ali Hasan, Jie Ding, Sina Farsiu, Vahid Tarokh
It has been conjectured that the Fisher divergence is more robust to model uncertainty than the conventional Kullback-Leibler (KL) divergence.
no code implementations • 14 Jun 2020 • Mahyar Nemati, Jie Ding, Jinho Choi
In this paper, we propose a new AmBC model over ambient orthogonal-frequency-division-multiplexing (OFDM) subcarriers in the frequency domain in conjunction with RIS for short-range communication scenarios.
no code implementations • 8 Jun 2020 • Jie Ding, Daiming Qu, Pei Liu, Jinho Choi
Preamble collision is a bottleneck that impairs the performance of random access (RA) user equipment (UE) in grant-free RA (GFRA).
no code implementations • 29 May 2020 • Chenglong Ye, Reza Ghanadan, Jie Ding
We propose a framework named meta clustering to address the challenge.
no code implementations • 15 May 2020 • Jiaying Zhou, Jie Ding, Kean Ming Tan, Vahid Tarokh
The crux is to sequentially incorporate additional learners that can enhance the prediction accuracy of an existing joint model, based on user-specified parameter-sharing patterns across a set of learners.
no code implementations • NeurIPS 2020 • Xun Xian, Xinran Wang, Jie Ding, Reza Ghanadan
In an increasing number of AI scenarios, collaborations among different organizations or agents (e.g., humans and robots, mobile units) are often essential to accomplish an organization-specific mission.
1 code implementation • 7 Feb 2020 • Enmao Diao, Jie Ding, Vahid Tarokh
In the absence of the controllers, our model reduces to non-conditional generative models.
no code implementations • NeurIPS 2019 • Jie Ding, Robert Calderbank, Vahid Tarokh
Motivated by Fisher divergence, in this paper we present a new set of information quantities which we refer to as gradient information.
1 code implementation • 8 Nov 2019 • Jiawei Zhang, Jie Ding, Yuhong Yang
For testing parametric classification models, the BAGofT has a broader scope than the existing methods since it is not restricted to specific parametric models (e.g., logistic regression).
no code implementations • 3 Nov 2019 • Yuhao Su, Jie Ding
We propose a two-stage method named variable grouping based Bayesian additive regression tree (GBART), with a well-developed Python package gbart available.
no code implementations • 23 Oct 2019 • Suya Wu, Enmao Diao, Jie Ding, Vahid Tarokh
Motivated by the ever-increasing demands for limited communication bandwidth and low-power consumption, we propose a new methodology, named joint Variational Autoencoders with Bernoulli mixture models (VAB), for performing clustering in the compressed data domain.
no code implementations • 21 Oct 2019 • Chris Cannella, Jie Ding, Mohammadreza Soltani, Vahid Tarokh
In this work, we introduce a new procedure for applying Restricted Boltzmann Machines (RBMs) to missing data inference tasks, based on linearization of the effective energy function governing the distribution of observations.
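For context, the "effective energy function governing the distribution of observations" in an RBM has a standard closed form: the free energy over the visible units, with $p(v) \propto \exp(-F(v))$. The sketch below computes this generic quantity with placeholder parameters; it is not the paper's linearization procedure.

```python
import numpy as np

def free_energy(v, W, a, b):
    """RBM free energy over visible units v:
    F(v) = -a^T v - sum_j log(1 + exp(b_j + W[:, j]^T v)),
    obtained by summing the joint energy over the binary hidden units.
    """
    # np.logaddexp(0, x) computes log(1 + exp(x)) stably.
    return -a @ v - np.sum(np.logaddexp(0.0, b + v @ W))

# Small random placeholder parameters: 3 visible units, 4 hidden units.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
a = rng.normal(size=3)
b = rng.normal(size=4)
v = np.array([1.0, 0.0, 1.0])
print(free_energy(v, W, a, b))
```

Lower free energy corresponds to higher probability of a visible configuration, which is why inference over missing entries can be cast as descending this function.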
no code implementations • 20 Oct 2019 • Jianyou Wang, Michael Xue, Ryan Culhane, Enmao Diao, Jie Ding, Vahid Tarokh
Speech Emotion Recognition (SER) has emerged as a critical component of the next generation human-machine interfacing technologies.
1 code implementation • 15 Oct 2019 • Cat P. Le, Yi Zhou, Jie Ding, Vahid Tarokh
Classical supervised classification tasks search for a nonlinear mapping that maps each encoded feature directly to a probability mass over the labels.
1 code implementation • 21 Aug 2019 • Enmao Diao, Jie Ding, Vahid Tarokh
Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), have become standard building blocks for learning from sequential online data in many research areas, including natural language processing and speech data analysis.
1 code implementation • 23 Mar 2019 • Enmao Diao, Jie Ding, Vahid Tarokh
We propose a new architecture for distributed image compression from a group of distributed data sources.
no code implementations • 22 Oct 2018 • Jie Ding, Vahid Tarokh, Yuhong Yang
In the era of big data, analysts usually explore various statistical models or machine learning methods for observed data in order to facilitate scientific discoveries or gain predictive power.
no code implementations • 11 Sep 2015 • Jie Ding, Mohammad Noshad, Vahid Tarokh
We define a new distance measure between stable AR filters and draw a reference curve that measures how much adding a new AR filter improves the performance of the model; we then choose the number of AR filters that has the maximum gap from the reference curve.
no code implementations • 11 Aug 2015 • Jie Ding, Vahid Tarokh, Yuhong Yang
When the data is generated from a finite-order autoregression, the Bayesian information criterion is known to be consistent, and so is the new criterion.
no code implementations • 6 Jun 2015 • Jie Ding, Mohammad Noshad, Vahid Tarokh
In this work, we consider the class of multi-state autoregressive processes that can be used to model non-stationary time-series of interest.