1 code implementation • 22 Apr 2024 • Enmao Diao, Qi Le, Suya Wu, Xinran Wang, Ali Anwar, Jie Ding, Vahid Tarokh
We introduce Collaborative Adaptation (ColA) with Gradient Learning (GL), a parameter-free, model-agnostic fine-tuning approach that decouples the computation of the gradient of hidden representations and parameters.
no code implementations • 15 Apr 2024 • Haoming Yang, Ali Hasan, Yuting Ng, Vahid Tarokh
We empirically compare the performance of the different architectures and estimators on real and synthetic datasets for time series and probabilistic modeling.
no code implementations • 27 Jan 2024 • Enmao Diao, Taposh Banerjee, Vahid Tarokh
We analyze the performance of this score-based hypothesis testing procedure and derive upper bounds on the probabilities of its Type I and II errors.
no code implementations • 21 Nov 2023 • Shyam Venkatasubramanian, Ahmed Aloui, Vahid Tarokh
Advancing loss function design is pivotal for optimizing neural network training and performance.
no code implementations • 7 Nov 2023 • Ahmed Aloui, Juncheng Dong, Cat P. Le, Vahid Tarokh
To address this, we introduce a model-agnostic data augmentation method that imputes the counterfactual outcomes for a selected subset of individuals.
no code implementations • 3 Oct 2023 • Cat P. Le, Chris Cannella, Ali Hasan, Yuting Ng, Vahid Tarokh
Transformers incorporating copula structures have demonstrated remarkable performance in time series prediction.
no code implementations • 20 Jun 2023 • Ahmed Aloui, Ali Hasan, Yuting Ng, Miroslav Pajic, Vahid Tarokh
Understanding individual treatment effects in extreme regimes is important for characterizing risks associated with different interventions.
no code implementations • 13 Jun 2023 • Ziyang Jiang, Yiling Liu, Michael H. Klein, Ahmed Aloui, Yiman Ren, Keyu Li, Vahid Tarokh, David Carlson
This is important in many scientific applications to identify the underlying mechanisms of a treatment effect.
no code implementations • 12 Jun 2023 • Juncheng Dong, Hao-Lun Hsu, Qitong Gao, Vahid Tarokh, Miroslav Pajic
In this work, we extend the two-player game by introducing an adversarial herd, a group of adversaries, in order to address ($\textit{i}$) the difficulty of the inner optimization problem, and ($\textit{ii}$) the potential over-pessimism caused by selecting a candidate adversary set that may include unlikely scenarios.
no code implementations • 1 Jun 2023 • Ali Hasan, Yu Chen, Yuting Ng, Mohamed Abdelghani, Anderson Schneider, Vahid Tarokh
In this framework, we relate the return times of a diffusion in a continuous path space to new arrivals of the point process.
no code implementations • 19 May 2023 • Cat P. Le, Juncheng Dong, Ahmed Aloui, Vahid Tarokh
To this end, we introduce a new continual learning approach for conditional generative adversarial networks by leveraging a mode-affinity score specifically designed for generative modeling.
no code implementations • 11 Apr 2023 • Junrong Lin, Mahmudul Hasan, Pinar Acar, Jose Blanchet, Vahid Tarokh
Our method is effective and robust in finding optimal processing paths.
no code implementations • 14 Mar 2023 • Shyam Venkatasubramanian, Sandeep Gogineni, Bosung Kang, Ali Pezeshki, Muralidhar Rangaswamy, Vahid Tarokh
Via the use of space-time adaptive processing (STAP) techniques and convolutional neural networks, these data-driven approaches to target localization have helped benchmark the performance of neural networks for matched scenarios.
1 code implementation • ICLR 2023 • Enmao Diao, Ganghua Wang, Jiawei Zhan, Yuhong Yang, Jie Ding, Vahid Tarokh
Our extensive experiments corroborate the hypothesis that for a generic pruning procedure, PQI decreases first when a large model is being effectively regularized and then increases when its compressibility reaches a limit that appears to correspond to the beginning of underfitting.
no code implementations • 8 Feb 2023 • Juncheng Dong, Weibin Mo, Zhengling Qi, Cong Shi, Ethan X. Fang, Vahid Tarokh
The objective is to use the offline dataset to find an optimal assortment.
no code implementations • 3 Feb 2023 • Yiling Liu, Juncheng Dong, Ziyang Jiang, Ahmed Aloui, Keyu Li, Hunter Klein, Vahid Tarokh, David Carlson
To address this limitation, we propose a novel generalization bound that reweights source classification error by aligning source and target sub-domains.
no code implementations • 1 Feb 2023 • Suya Wu, Enmao Diao, Taposh Banerjee, Jie Ding, Vahid Tarokh
This paper develops a new variant of the classical Cumulative Sum (CUSUM) algorithm for quickest change detection.
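For context, the classical CUSUM recursion that this variant builds on can be sketched as follows. This is the textbook likelihood-ratio form for a Gaussian mean shift, not the paper's new statistic; the means, variance, threshold, and data are illustrative.

```python
import numpy as np

def cusum(samples, pre_mean, post_mean, sigma=1.0, threshold=5.0):
    """Classical CUSUM for a mean shift in Gaussian data.

    Accumulates the log-likelihood ratio of the post-change vs. pre-change
    model, clips the running statistic at zero, and raises an alarm when it
    crosses `threshold`. Returns the detection index, or -1 if no alarm.
    """
    stat = 0.0
    for t, x in enumerate(samples):
        # Log-likelihood ratio of N(post_mean, sigma^2) vs. N(pre_mean, sigma^2)
        llr = ((x - pre_mean) ** 2 - (x - post_mean) ** 2) / (2 * sigma ** 2)
        stat = max(0.0, stat + llr)
        if stat > threshold:
            return t
    return -1

rng = np.random.default_rng(0)
# Mean shifts from 0 to 2 at index 100
data = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
alarm = cusum(data, pre_mean=0.0, post_mean=2.0)
```

The clipping at zero is what makes CUSUM recursive: the statistic restarts whenever the accumulated evidence for a change drops below zero.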
no code implementations • 17 Dec 2022 • Qi Le, Enmao Diao, Xinran Wang, Ali Anwar, Vahid Tarokh, Jie Ding
Recommender Systems (RSs) have become increasingly important in many application domains, such as digital marketing.
no code implementations • 7 Nov 2022 • Bowen Li, Suya Wu, Erin E. Tripp, Ali Pezeshki, Vahid Tarokh
We develop a recursive least square (RLS) type algorithm with a minimax concave penalty (MCP) for adaptive identification of a sparse tap-weight vector that represents a communication channel.
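A minimal sketch of plain RLS channel identification follows; the MCP shrinkage step that the paper adds to promote sparsity is omitted here, and the channel taps, forgetting factor, and noise level are illustrative.

```python
import numpy as np

def rls_identify(x, d, order=4, lam=0.99, delta=100.0):
    """Standard recursive least squares for FIR channel identification.

    x: input sequence, d: observed channel output, order: number of taps,
    lam: forgetting factor, delta: initial inverse-correlation scale.
    Returns the estimated tap-weight vector.
    """
    w = np.zeros(order)
    P = delta * np.eye(order)            # inverse correlation matrix
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]  # regressor: [x[n], x[n-1], ...]
        k = P @ u / (lam + u @ P @ u)     # gain vector
        e = d[n] - w @ u                  # a-priori estimation error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w

rng = np.random.default_rng(1)
true_taps = np.array([0.8, 0.0, 0.3, 0.0])  # sparse channel
x = rng.normal(size=2000)
d = np.convolve(x, true_taps)[:len(x)] + 0.01 * rng.normal(size=len(x))
w_hat = rls_identify(x, d)
```

Without a sparsity penalty, RLS estimates the zero taps as small but nonzero values; the MCP term in the paper is what drives those entries exactly to zero.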
no code implementations • 1 Oct 2022 • Ahmed Aloui, Juncheng Dong, Cat P. Le, Vahid Tarokh
To this end, we theoretically assess the feasibility of transferring ITE knowledge and present a practical framework for efficient transfer.
no code implementations • 7 Sep 2022 • Shyam Venkatasubramanian, Sandeep Gogineni, Bosung Kang, Ali Pezeshki, Muralidhar Rangaswamy, Vahid Tarokh
Leveraging the advanced functionalities of modern radio frequency (RF) modeling and simulation tools, specifically designed for adaptive radar processing applications, this paper presents a data-driven approach to improve accuracy in radar target localization post adaptive radar detection.
no code implementations • 27 May 2022 • Yuting Ng, Ali Hasan, Vahid Tarokh
Understanding multivariate dependencies in both the bulk and the tails of a distribution is an important problem for many applications, such as ensuring algorithms are robust to observations that are infrequent but have devastating effects.
no code implementations • 26 Jan 2022 • Shyam Venkatasubramanian, Chayut Wongkamthong, Mohammadreza Soltani, Bosung Kang, Sandeep Gogineni, Ali Pezeshki, Muralidhar Rangaswamy, Vahid Tarokh
In this regard, we will generate a large, representative adaptive radar signal processing database for training and testing, analogous in spirit to the COCO dataset for natural images.
no code implementations • 26 Jan 2022 • Mohammadreza Soltani, Suya Wu, Yuerong Li, Jie Ding, Vahid Tarokh
We propose a new structured pruning framework for compressing Deep Neural Networks (DNNs) with skip connections, based on measuring the statistical dependency of hidden layers and predicted outputs.
no code implementations • 22 Jan 2022 • Juncheng Dong, Suya Wu, Mohammadreza Soltani, Vahid Tarokh
In particular, by modeling the adversaries as learning agents, we show that the proposed MAAS is able to successfully choose the transmitted channel(s) and their respective allocated power(s) without any prior knowledge of the sender strategy.
1 code implementation • 10 Jan 2022 • Mohammadreza Momenifar, Enmao Diao, Vahid Tarokh, Andrew D. Bragg
In this study, we apply a physics-informed Deep Learning technique based on vector quantization to generate a discrete, low-dimensional representation of data from simulations of three-dimensional turbulent flows.
1 code implementation • 7 Dec 2021 • Mohammadreza Momenifar, Enmao Diao, Vahid Tarokh, Andrew D. Bragg
We use a data-driven approach to model a three-dimensional turbulent flow using cutting-edge Deep Learning techniques.
no code implementations • ICLR 2022 • Juncheng Dong, Simiao Ren, Yang Deng, Omar Khatib, Jordan Malof, Mohammadreza Soltani, Willie Padilla, Vahid Tarokh
To this end, we propose a physics-infused deep neural network based on the Blaschke products for phase retrieval.
no code implementations • 25 Nov 2021 • Xingzi Xu, Ali Hasan, Khalil Elkhalil, Jie Ding, Vahid Tarokh
While NODEs model the evolution of latent variables as the solution to an ODE, C-NODE models their evolution as the solution of a family of first-order quasi-linear partial differential equations (PDEs) along curves on which the PDEs reduce to ODEs, referred to as characteristic curves.
1 code implementation • NeurIPS 2021 • Yang Deng*, Juncheng Dong*, Simiao Ren*, Omar Khatib, Mohammadreza Soltani, Vahid Tarokh, Willie Padilla, Jordan Malof
Recently, it has been shown that deep learning can be an alternative solution to infer the relationship between an AEM geometry and its properties using a (relatively) small pool of CEMS data.
1 code implementation • 26 Oct 2021 • Enmao Diao, Vahid Tarokh, Jie Ding
Recommender Systems (RSs) are operated locally by different organizations in many realistic scenarios.
1 code implementation • ICLR 2022 • Cat P. Le, Juncheng Dong, Mohammadreza Soltani, Vahid Tarokh
We propose an asymmetric affinity score for representing the complexity of utilizing the knowledge of one task for learning another one.
no code implementations • 29 Sep 2021 • Chris Cannella, Vahid Tarokh
Current objective functions used for training neural MCMC proposal distributions implicitly rely on architectural restrictions to yield sensible optimization results, which hampers the development of highly expressive neural MCMC proposal architectures.
no code implementations • 3 Jun 2021 • Chris Cannella, Vahid Tarokh
Current objective functions used for training neural MCMC proposal distributions implicitly rely on architectural restrictions to yield sensible optimization results, which hampers the development of highly expressive neural MCMC proposal architectures.
1 code implementation • 2 Jun 2021 • Enmao Diao, Jie Ding, Vahid Tarokh
However, the underlying organizations may have little interest in sharing their local data, models, and objective functions.
1 code implementation • 2 Jun 2021 • Enmao Diao, Jie Ding, Vahid Tarokh
Most existing results on Federated Learning (FL) assume the clients have ground-truth labels.
1 code implementation • 31 May 2021 • Anna K. Yanchenko, Mohammadreza Soltani, Robert J. Ravier, Sayan Mukherjee, Vahid Tarokh
In this work, we instead take the perspective of relating deep features to well-studied, hand-crafted features that are meaningful for the application of interest.
1 code implementation • 23 Mar 2021 • Cat P. Le, Mohammadreza Soltani, Juncheng Dong, Vahid Tarokh
Next, we construct an online neural architecture search framework using the Fisher task distance, in which we have access to the past learned tasks.
1 code implementation • 27 Feb 2021 • Cat P. Le, Mohammadreza Soltani, Robert Ravier, Vahid Tarokh
In this paper, we propose a neural architecture search framework based on a similarity measure between some baseline tasks and a target task.
1 code implementation • 22 Feb 2021 • Yuting Ng, Ali Hasan, Khalil Elkhalil, Vahid Tarokh
We propose a new generative modeling technique for learning multidimensional cumulative distribution functions (CDFs) in the form of copulas.
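As background for the copula formulation, a generic bivariate Gaussian copula can be sampled by pushing correlated Gaussians through the standard normal CDF: the marginals become uniform while the dependence structure is retained. This is the standard construction, not the paper's learned-CDF model; the correlation and sample size are illustrative.

```python
import math
import numpy as np

def gaussian_copula_samples(n, rho, seed=0):
    """Draw n samples from a bivariate Gaussian copula with correlation rho.

    Correlated standard Gaussians are mapped through the normal CDF,
    yielding Uniform(0, 1) marginals with Gaussian dependence.
    """
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    # Standard normal CDF via the error function
    norm_cdf = np.vectorize(lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0))))
    return norm_cdf(z)  # each column is marginally Uniform(0, 1)

u = gaussian_copula_samples(5000, rho=0.8)
```

By Sklar's theorem, any joint CDF factors into its marginals and a copula, which is why learning the copula suffices to capture multivariate dependence.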
no code implementations • 17 Feb 2021 • Ali Hasan, Khalil Elkhalil, Yuting Ng, Joao M. Pereira, Sina Farsiu, Jose H. Blanchet, Vahid Tarokh
We propose a novel neural network architecture that enables non-parametric calibration and generation of multivariate extreme value distributions (MEVs).
1 code implementation • 1 Jan 2021 • Mohammadreza Soltani, Suya Wu, Yuerong Li, Jie Ding, Vahid Tarokh
We measure a new model-free information between the feature maps and the output of the network.
1 code implementation • 24 Dec 2020 • Jie Ding, Enmao Diao, Jiawei Zhou, Vahid Tarokh
We propose a generalized notion of Takeuchi's information criterion and prove that the proposed method can asymptotically achieve the optimal out-sample prediction loss under reasonable assumptions.
1 code implementation • 27 Oct 2020 • Cat P. Le, Mohammadreza Soltani, Robert Ravier, Vahid Tarokh
The design of handcrafted neural networks requires a lot of time and resources.
3 code implementations • ICLR 2021 • Enmao Diao, Jie Ding, Vahid Tarokh
In this work, we propose a new federated learning framework named HeteroFL to address heterogeneous clients equipped with very different computation and communication capabilities.
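The core mechanism — serving weaker clients a width-reduced slice of the global model and averaging the overlapping parameters back — can be sketched as follows. This is a simplified illustration of width-based parameter sharing, not the exact HeteroFL scheme; the shapes and ratios are illustrative.

```python
import numpy as np

def extract_submodel(weight, ratio):
    """Give a weaker client the top-left slice of a global weight matrix."""
    r = max(1, int(weight.shape[0] * ratio))
    c = max(1, int(weight.shape[1] * ratio))
    return weight[:r, :c].copy()

def aggregate(global_shape, client_updates):
    """Average client sub-matrices entrywise over the clients covering each entry."""
    total = np.zeros(global_shape)
    count = np.zeros(global_shape)
    for w in client_updates:
        r, c = w.shape
        total[:r, :c] += w
        count[:r, :c] += 1
    count[count == 0] = 1  # entries no client touched stay zero
    return total / count

W = np.arange(16, dtype=float).reshape(4, 4)
subs = [extract_submodel(W, r) for r in (1.0, 0.5)]  # one full, one half-width client
new_W = aggregate(W.shape, subs)
```

Because each parameter is averaged only over the clients that actually hold it, clients of very different capacities can still train one shared global model.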
no code implementations • 13 Jul 2020 • Marko Angjelichinoski, Bijan Pesaran, Vahid Tarokh
In this paper, we consider the problem of cross-subject decoding, where neural activity data collected from the prefrontal cortex of a given subject (destination) is used to decode motor intentions from the neural activity of a different subject (source).
no code implementations • ICLR 2021 • Chris Cannella, Mohammadreza Soltani, Vahid Tarokh
We introduce Projected Latent Markov Chain Monte Carlo (PL-MCMC), a technique for sampling from the high-dimensional conditional distributions learned by a normalizing flow.
no code implementations • 13 Jul 2020 • Robert J. Ravier, Mohammadreza Soltani, Miguel Simões, Denis Garagic, Vahid Tarokh
GeoStat representations are based on a generalization of recent methods for trajectory classification, and summarize the information of a time series in terms of comprehensive statistics of (possibly windowed) distributions of easy-to-compute differential geometric quantities, requiring no dynamic time warping.
no code implementations • 12 Jul 2020 • Khalil Elkhalil, Ali Hasan, Jie Ding, Sina Farsiu, Vahid Tarokh
It has been conjectured that the Fisher divergence is more robust to model uncertainty than the conventional Kullback-Leibler (KL) divergence.
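The Fisher divergence compared here, $D_F(p\|q) = \mathbb{E}_p[(\nabla_x \log p - \nabla_x \log q)^2]$, is easy to estimate by Monte Carlo from the score functions. A sketch for 1-D Gaussians (for a pure mean shift with unit variances, the scores differ by a constant and the divergence equals the squared shift):

```python
import numpy as np

def fisher_divergence_gauss(mu_p, s_p, mu_q, s_q, n=100000, seed=0):
    """Monte Carlo estimate of the Fisher divergence D_F(p||q)
    between 1-D Gaussians p = N(mu_p, s_p^2) and q = N(mu_q, s_q^2)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mu_p, s_p, size=n)
    score_p = -(x - mu_p) / s_p**2   # d/dx log p(x)
    score_q = -(x - mu_q) / s_q**2   # d/dx log q(x)
    return np.mean((score_p - score_q) ** 2)

d_same = fisher_divergence_gauss(0, 1, 0, 1)    # identical distributions
d_shift = fisher_divergence_gauss(0, 1, 1, 1)   # unit mean shift
```

Because only scores (gradients of log-densities) appear, the normalizing constants of p and q cancel, which is a key practical advantage of Fisher-divergence-based methods.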
1 code implementation • 12 Jul 2020 • Ali Hasan, João M. Pereira, Sina Farsiu, Vahid Tarokh
We present a method for learning latent stochastic differential equations (SDEs) from high-dimensional time series data.
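Simulating an SDE forward is the basic building block underneath such latent SDE models; the standard tool is the Euler-Maruyama scheme. A sketch for an Ornstein-Uhlenbeck process (drift rate, noise scale, and step size are illustrative):

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, dt, n_steps, seed=0):
    """Simulate dX_t = drift(X_t) dt + diffusion(X_t) dW_t by Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    path = np.empty(n_steps + 1)
    path[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over dt
        path[i + 1] = path[i] + drift(path[i]) * dt + diffusion(path[i]) * dw
    return path

# Ornstein-Uhlenbeck process: mean-reverting toward 0 at rate theta
theta, sigma = 2.0, 0.3
path = euler_maruyama(lambda x: -theta * x, lambda x: sigma,
                      x0=3.0, dt=0.01, n_steps=1000)
```

Starting far from equilibrium at x0 = 3, the simulated path decays toward the stationary distribution N(0, sigma^2 / (2 * theta)).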
no code implementations • 15 May 2020 • Jiaying Zhou, Jie Ding, Kean Ming Tan, Vahid Tarokh
The crux is to sequentially incorporate additional learners that can enhance the prediction accuracy of an existing joint model, based on user-specified parameter-sharing patterns across a set of learners.
no code implementations • 26 Feb 2020 • Yi Zhou, Zhe Wang, Kaiyi Ji, Yingbin Liang, Vahid Tarokh
Our APG-restart is designed to 1) allow for adopting flexible parameter restart schemes that cover many existing ones; 2) have a global sub-linear convergence rate in nonconvex and nonsmooth optimization; and 3) have guaranteed convergence to a critical point and have various types of asymptotic convergence rates depending on the parameterization of local geometry in nonconvex and nonsmooth optimization.
1 code implementation • 7 Feb 2020 • Enmao Diao, Jie Ding, Vahid Tarokh
In the absence of the controllers, our model reduces to non-conditional generative models.
no code implementations • 2 Jan 2020 • Yuting Ng, João M. Pereira, Denis Garagic, Vahid Tarokh
Marine buoys aid in the battle against Illegal, Unreported and Unregulated (IUU) fishing by detecting fishing vessels in their vicinity.
no code implementations • NeurIPS 2019 • Jie Ding, Robert Calderbank, Vahid Tarokh
Motivated by Fisher divergence, in this paper we present a new set of information quantities which we refer to as gradient information.
no code implementations • NeurIPS 2019 • Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, Vahid Tarokh
SARAH and SPIDER are two recently developed stochastic variance-reduced algorithms, and SPIDER has been shown to achieve a near-optimal first-order oracle complexity in smooth nonconvex optimization.
no code implementations • 12 Nov 2019 • Yan Zhang, Robert J. Ravier, Michael M. Zavlanos, Vahid Tarokh
In this paper, we consider the problem of distributed online convex optimization, where a network of local agents aim to jointly optimize a convex function over a period of multiple time steps.
no code implementations • 8 Nov 2019 • Marko Angjelichinoski, John Choi, Taposh Banerjee, Bijan Pesaran, Vahid Tarokh
We propose an efficient data-driven estimation approach for linear transfer functions that uses the first- and second-order moments of the class-conditional distributions.
no code implementations • 23 Oct 2019 • Suya Wu, Enmao Diao, Jie Ding, Vahid Tarokh
Motivated by the ever-increasing demands for limited communication bandwidth and low-power consumption, we propose a new methodology, named joint Variational Autoencoders with Bernoulli mixture models (VAB), for performing clustering in the compressed data domain.
1 code implementation • 22 Oct 2019 • Ali Hasan, João M. Pereira, Robert Ravier, Sina Farsiu, Vahid Tarokh
We develop a framework for estimating unknown partial differential equations from noisy data, using a deep learning approach.
no code implementations • 21 Oct 2019 • Chris Cannella, Jie Ding, Mohammadreza Soltani, Vahid Tarokh
In this work, we introduce a new procedure for applying Restricted Boltzmann Machines (RBMs) to missing data inference tasks, based on linearization of the effective energy function governing the distribution of observations.
no code implementations • 20 Oct 2019 • Jianyou Wang, Michael Xue, Ryan Culhane, Enmao Diao, Jie Ding, Vahid Tarokh
Speech Emotion Recognition (SER) has emerged as a critical component of the next generation human-machine interfacing technologies.
1 code implementation • 15 Oct 2019 • Cat P. Le, Yi Zhou, Jie Ding, Vahid Tarokh
Classical supervised classification tasks search for a nonlinear mapping that maps each encoded feature directly to a probability mass over the labels.
1 code implementation • 21 Aug 2019 • Enmao Diao, Jie Ding, Vahid Tarokh
Recurrent Neural Networks (RNNs) and their variations, such as Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), have become standard building blocks for learning online data of a sequential nature in many research areas, including natural language processing and speech data analysis.
1 code implementation • 23 Mar 2019 • Enmao Diao, Jie Ding, Vahid Tarokh
We propose a new architecture for distributed image compression from a group of distributed data sources.
no code implementations • 7 Feb 2019 • Yi Zhou, Zhe Wang, Kaiyi Ji, Yingbin Liang, Vahid Tarokh
In this paper, we develop novel momentum schemes with flexible coefficient settings to accelerate SPIDER for nonconvex and nonsmooth composite optimization, and show that the resulting algorithms achieve the near-optimal gradient oracle complexity for achieving a generalized first-order stationary condition.
no code implementations • 29 Jan 2019 • Marko Angjelichinoski, Taposh Banerjee, John Choi, Bijan Pesaran, Vahid Tarokh
We consider the problem of predicting eye movement goals from local field potentials (LFP) recorded through a multielectrode array in the macaque prefrontal cortex.
no code implementations • ICLR 2019 • Yi Zhou, Junjie Yang, Huishuai Zhang, Yingbin Liang, Vahid Tarokh
Stochastic gradient descent (SGD) has been found to be surprisingly effective in training a variety of deep neural networks.
1 code implementation • 25 Oct 2018 • Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, Vahid Tarokh
SARAH and SPIDER are two recently developed stochastic variance-reduced algorithms, and SPIDER has been shown to achieve a near-optimal first-order oracle complexity in smooth nonconvex optimization.
no code implementations • 22 Oct 2018 • Jie Ding, Vahid Tarokh, Yuhong Yang
In the era of big data, analysts usually explore various statistical models or machine learning methods for observed data in order to facilitate scientific discoveries or gain predictive power.
no code implementations • NeurIPS 2018 • Shahin Shahrampour, Vahid Tarokh
We establish an out-of-sample error bound capturing the trade-off between the error in terms of explicit features (approximation error) and the error due to spectral properties of the best model in the Hilbert space associated to the combined kernel (spectral error).
no code implementations • 10 Jun 2018 • Ilya Soloveychik, Vahid Tarokh
We consider the problem of model selection in Gaussian Markov fields in the sample deficient scenario.
no code implementations • 12 Feb 2018 • Ilya Soloveychik, Vahid Tarokh
Assuming that the entire graph can be partitioned into a number of spatial regions with similar edge parameters and reasonably regular boundaries, we develop new information-theoretic sample complexity bounds and show that a bounded number of samples can be sufficient to consistently recover these regions.
no code implementations • 19 Dec 2017 • Shahin Shahrampour, Ahmad Beirami, Vahid Tarokh
The randomized-feature approach has been successfully employed in large-scale kernel approximation and supervised learning.
no code implementations • NeurIPS 2017 • Ahmad Beirami, Meisam Razaviyayn, Shahin Shahrampour, Vahid Tarokh
Such bias is measured in practice by the cross-validation procedure, where the data set is partitioned into a training set used for fitting and a held-out validation set used to measure out-of-sample performance.
no code implementations • 21 Jul 2017 • Seongah Jeong, Xiang Li, Jiarui Yang, Quanzheng Li, Vahid Tarokh
In order to address the limitations of the unsupervised DLSC-based fMRI studies, we utilize the prior knowledge of task paradigm in the learning step to train a data-driven dictionary and to model the sparse representation.
no code implementations • 9 Jul 2017 • Shahin Shahrampour, Vahid Tarokh
At each round, the budget is divided by a nonlinear function of remaining arms, and the arms are pulled correspondingly.
no code implementations • 8 Sep 2016 • Shahin Shahrampour, Mohammad Noshad, Vahid Tarokh
Based on this result, we develop an algorithm that divides the budget according to a nonlinear function of remaining arms at each round.
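Sequential halving is the best-known member of this family of budget-splitting elimination algorithms and makes a useful reference point; the sketch below uses the standard even split of the per-round budget rather than the paper's nonlinear division rule, and the arm means and budget are illustrative.

```python
import math
import numpy as np

def sequential_halving(means, budget, seed=0):
    """Best-arm identification by sequential halving.

    In each round the per-round budget is split evenly over the surviving
    arms, each arm is pulled that many times, and the worse half of the
    arms is eliminated. Returns the index of the single surviving arm.
    """
    rng = np.random.default_rng(seed)
    arms = list(range(len(means)))
    rounds = math.ceil(math.log2(len(arms)))
    for _ in range(rounds):
        pulls = max(1, budget // (len(arms) * rounds))
        # Empirical mean reward of each surviving arm (unit-variance Gaussian)
        rewards = [rng.normal(means[a], 1.0, size=pulls).mean() for a in arms]
        order = np.argsort(rewards)[::-1]
        arms = [arms[i] for i in order[:max(1, len(arms) // 2)]]
    return arms[0]

best = sequential_halving([0.1, 0.2, 0.9, 0.3], budget=8000)
```

Halving concentrates an ever larger share of the budget on the surviving arms, which is the same principle the paper refines with its nonlinear budget-division function.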
no code implementations • 11 Sep 2015 • Jie Ding, Mohammad Noshad, Vahid Tarokh
We define a new distance measure between stable AR filters and draw a reference curve used to gauge how much adding a new AR filter improves the model's performance; we then choose the number of AR filters with the maximum gap from the reference curve.
no code implementations • 11 Aug 2015 • Jie Ding, Vahid Tarokh, Yuhong Yang
When the data is generated from a finite order autoregression, the Bayesian information criterion is known to be consistent, and so is the new criterion.
no code implementations • 6 Jun 2015 • Jie Ding, Mohammad Noshad, Vahid Tarokh
In this work, we consider the class of multi-state autoregressive processes that can be used to model non-stationary time-series of interest.