no code implementations • 26 Jan 2022 • Shyam Venkatasubramanian, Chayut Wongkamthong, Mohammadreza Soltani, Bosung Kang, Sandeep Gogineni, Ali Pezeshki, Muralidhar Rangaswamy, Vahid Tarokh
In this regard, we will generate a large, representative adaptive radar signal processing database for training and testing, analogous in spirit to the COCO dataset for natural images.
no code implementations • 26 Jan 2022 • Mohammadreza Soltani, Suya Wu, Yuerong Li, Jie Ding, Vahid Tarokh
We propose a new structured pruning framework for compressing Deep Neural Networks (DNNs) with skip connections, based on measuring the statistical dependency of hidden layers and predicted outputs.
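The snippet does not specify the dependency measure. As a rough illustration only (not the authors' exact method), one generic way to score hidden units by their statistical dependency with the output is the Hilbert-Schmidt Independence Criterion (HSIC), pruning the lowest-scoring channels; the channel layout and kernel bandwidth below are hypothetical.

```python
import numpy as np

def hsic(X, Y, sigma=1.0):
    """Biased HSIC estimate with Gaussian kernels -- a generic dependency score."""
    n = X.shape[0]
    def gram(Z):
        sq = np.sum(Z**2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2 * Z @ Z.T
        return np.exp(-d2 / (2 * sigma**2))
    K, L = gram(X), gram(Y)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
n = 200
acts = rng.normal(size=(n, 3))     # activations of 3 hypothetical channels
out = np.tanh(acts[:, :1])         # output depends only on channel 0
scores = [hsic(acts[:, i:i + 1], out) for i in range(3)]
keep = int(np.argmax(scores))      # channel 0 scores highest; prune the rest
```

Channels whose activations are statistically independent of the prediction receive near-zero scores and become pruning candidates.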
no code implementations • ICLR 2022 • Juncheng Dong, Simiao Ren, Yang Deng, Omar Khatib, Jordan Malof, Mohammadreza Soltani, Willie Padilla, Vahid Tarokh
To this end, we propose a physics-infused deep neural network based on the Blaschke products for phase retrieval.
1 code implementation • NeurIPS 2021 • Yang Deng*, Juncheng Dong*, Simiao Ren*, Omar Khatib, Mohammadreza Soltani, Vahid Tarokh, Willie Padilla, Jordan Malof
Recently, it has been shown that deep learning can be an alternative solution to infer the relationship between an AEM geometry and its properties using a (relatively) small pool of CEMS data.
1 code implementation • ICLR 2022 • Cat P. Le, Juncheng Dong, Mohammadreza Soltani, Vahid Tarokh
We propose an asymmetric affinity score that represents the complexity of utilizing the knowledge of one task for learning another.

1 code implementation • 31 May 2021 • Anna K. Yanchenko, Mohammadreza Soltani, Robert J. Ravier, Sayan Mukherjee, Vahid Tarokh
In this work, we instead take the perspective of relating deep features to well-studied, hand-crafted features that are meaningful for the application of interest.
1 code implementation • 23 Mar 2021 • Cat P. Le, Mohammadreza Soltani, Juncheng Dong, Vahid Tarokh
Next, we construct an online neural architecture search framework using the Fisher task distance, in which we have access to the past learned tasks.
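As a simplified sketch of a Fisher-based task distance (not the paper's exact estimator), one can evaluate the diagonal empirical Fisher information of a shared logistic model on each task and compare the resulting matrices; the logistic model, the shared weights `w`, and the tasks below are hypothetical.

```python
import numpy as np

def diag_fisher(X, y, w):
    """Diagonal empirical Fisher of a logistic model at shared weights w."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    g = X * (p - y)[:, None]                   # per-example gradients
    return np.mean(g**2, axis=0)

def task_distance(F1, F2):
    """Frechet-style distance between diagonal Fisher matrices."""
    return np.linalg.norm(np.sqrt(F1) - np.sqrt(F2)) / np.sqrt(2)

rng = np.random.default_rng(1)
w = rng.normal(size=5)
X = rng.normal(size=(500, 5))
yA = (X @ w > 0).astype(float)    # task A
yB = (X @ w > 0).astype(float)    # task B: same labeling rule as A
yC = (X @ -w > 0).astype(float)   # task C: opposite labeling rule
FA, FB, FC = (diag_fisher(X, t, w) for t in (yA, yB, yC))
# identical tasks land closer in Fisher distance than opposing ones
```

Tasks that induce similar loss geometry at the shared weights get small distances, which is what makes such a score usable for ranking past tasks against a new one.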
1 code implementation • 27 Feb 2021 • Cat P. Le, Mohammadreza Soltani, Robert Ravier, Vahid Tarokh
In this paper, we propose a neural architecture search framework based on a similarity measure between some baseline tasks and a target task.
1 code implementation • 1 Jan 2021 • Mohammadreza Soltani, Suya Wu, Yuerong Li, Jie Ding, Vahid Tarokh
We measure a new model-free information between the feature maps and the output of the network.
1 code implementation • 27 Oct 2020 • Cat P. Le, Mohammadreza Soltani, Robert Ravier, Vahid Tarokh
The design of handcrafted neural networks requires considerable time and resources.
no code implementations • ICLR 2021 • Chris Cannella, Mohammadreza Soltani, Vahid Tarokh
We introduce Projected Latent Markov Chain Monte Carlo (PL-MCMC), a technique for sampling from the high-dimensional conditional distributions learned by a normalizing flow.
no code implementations • 13 Jul 2020 • Robert J. Ravier, Mohammadreza Soltani, Miguel Simões, Denis Garagic, Vahid Tarokh
GeoStat representations are based on a generalization of recent methods for trajectory classification. They summarize the information of a time series through comprehensive statistics of (possibly windowed) distributions of easy-to-compute differential-geometric quantities, requiring no dynamic time warping.
no code implementations • 7 Jul 2020 • Minsu Cho, Mohammadreza Soltani, Chinmay Hegde
In this paper, we study two important problems in the automated design of neural networks -- Hyper-parameter Optimization (HPO), and Neural Architecture Search (NAS) -- through the lens of sparse recovery methods.
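The snippet does not give the algorithm; as a hypothetical illustration of the sparse-recovery viewpoint, one can treat validation loss as an approximately sparse function of binary hyperparameter indicators and recover the few influential ones with a Lasso solved by proximal gradient (ISTA). All names and constants below are illustrative.

```python
import numpy as np

def ista_lasso(A, y, lam=0.5, steps=2000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(2)
n_cfg, n_hp = 60, 20
A = rng.choice([0.0, 1.0], size=(n_cfg, n_hp))     # random binary configurations
true = np.zeros(n_hp)
true[[3, 7]] = [2.0, -1.5]                          # only two hyperparameters matter
y = A @ true + 0.01 * rng.normal(size=n_cfg)        # observed validation losses
x = ista_lasso(A, y)
important = set(int(i) for i in np.argsort(-np.abs(x))[:2])
```

Because only a handful of hyperparameters typically drive the loss, far fewer trials than an exhaustive grid are needed to identify them.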
no code implementations • 21 Oct 2019 • Chris Cannella, Jie Ding, Mohammadreza Soltani, Vahid Tarokh
In this work, we introduce a new procedure for applying Restricted Boltzmann Machines (RBMs) to missing data inference tasks, based on linearization of the effective energy function governing the distribution of observations.
1 code implementation • 7 Jun 2019 • Minsu Cho, Mohammadreza Soltani, Chinmay Hegde
Neural Architecture Search remains a very challenging meta-learning problem.
no code implementations • ICLR 2019 • Mohammadreza Soltani, Swayambhoo Jain, Abhinav V. Sambasivan
Recently, Generative Adversarial Networks (GANs) have emerged as a popular alternative for modeling complex high dimensional distributions.
no code implementations • ICLR Workshop DeepGenStruct 2019 • Mohammadreza Soltani, Swayambhoo Jain, Abhinav Sambasivan
In this paper, we consider the observation setting in which samples from a target distribution are given by the superposition of two structured components, and we leverage GANs to learn the structure of these components.
no code implementations • 12 Feb 2019 • Mohammadreza Soltani, Swayambhoo Jain, Abhinav Sambasivan
Recently, Generative Adversarial Networks (GANs) have emerged as a popular alternative for modeling complex high dimensional distributions.
no code implementations • 8 Dec 2017 • Mohammadreza Soltani, Chinmay Hegde
In this paper, we provide a novel algorithmic framework that achieves the best of both worlds: it is asymptotically as fast as factorization methods, with no dependence on the condition number.
no code implementations • 29 Sep 2017 • Viraj Shah, Mohammadreza Soltani, Chinmay Hegde
We consider the problem of reconstructing signals and images from periodic nonlinearities.
no code implementations • 8 Aug 2017 • Mohammadreza Soltani, Chinmay Hegde
We consider the problem of demixing two (or more) structured high-dimensional vectors from a limited number of nonlinear observations, where the nonlinearity is due to either a periodic or an aperiodic function.
no code implementations • 27 Jun 2017 • Mohammadreza Soltani, Chinmay Hegde
Existing methods for this problem assume that the precision matrix of the observed variables is the superposition of a sparse and a low-rank component.
no code implementations • 21 May 2017 • Mohammadreza Soltani, Chinmay Hegde
We consider the problem of estimation of a low-rank matrix from a limited number of noisy rank-one projections.
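Rank-one projections observe `y_i = a_i^T X a_i` for a low-rank matrix `X`. As a simplified sketch (noiseless, rank-1 PSD case, plain factorized gradient descent rather than the paper's estimator, and an initialization near the truth for brevity where a spectral initialization would be used in practice):

```python
import numpy as np

rng = np.random.default_rng(3)
d, r, m = 8, 1, 200
u_true = rng.normal(size=(d, r))
u_true /= np.linalg.norm(u_true)
X_true = u_true @ u_true.T                      # rank-1 PSD ground truth
A = rng.normal(size=(m, d))
y = np.einsum('md,de,me->m', A, X_true, A)      # y_i = a_i^T X a_i

# factorized gradient descent on f(U) = mean((a_i^T U U^T a_i - y_i)^2) / 4
U = 0.1 * rng.normal(size=(d, r)) + 0.1 * u_true   # init near truth (illustrative)
for _ in range(2000):
    Au = A @ U                                  # (m, r)
    resid = np.sum(Au**2, axis=1) - y           # (m,)
    U -= 0.05 * (A * resid[:, None]).T @ Au / m # gradient step

err = np.linalg.norm(U @ U.T - X_true) / np.linalg.norm(X_true)
```

The factorized parameterization never forms the full `d x d` matrix, which is what makes such schemes scale; the price is a nonconvex landscape that demands a careful initialization.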
no code implementations • 23 Jan 2017 • Mohammadreza Soltani, Chinmay Hegde
Random sinusoidal features are a popular approach for speeding up kernel-based inference in large datasets.
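A minimal sketch of the idea (in the style of random Fourier features, with an illustrative bandwidth and feature count): random cosine features whose inner products approximate a Gaussian kernel, so kernel methods can run on an explicit low-dimensional map instead of the full Gram matrix.

```python
import numpy as np

def random_sinusoidal_features(X, D=5000, sigma=1.0, seed=0):
    """Map X to D random cosine features approximating the Gaussian kernel."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], D))
    b = rng.uniform(0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(4)
X = rng.normal(size=(5, 3))
Z = random_sinusoidal_features(X)
K_approx = Z @ Z.T                               # inner products of features
sq = np.sum(X**2, axis=1)
K_true = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / 2)
err = np.max(np.abs(K_approx - K_true))          # uniform approximation error
```

The approximation error shrinks roughly as `1/sqrt(D)`, so the feature dimension trades accuracy for speed.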
no code implementations • 23 Jan 2017 • Mohammadreza Soltani, Chinmay Hegde
Specifically, we show that for certain types of structured superposition models, our method provably recovers the components given merely $n = \mathcal{O}(s)$ samples where $s$ denotes the number of nonzero entries in the underlying components.
no code implementations • 3 Aug 2016 • Mohammadreza Soltani, Chinmay Hegde
We study the problem of demixing a pair of sparse signals from noisy, nonlinear observations of their superposition.
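The paper's setting is nonlinear; as a simplified linear illustration of demixing (identity and DCT bases, a generic alternating hard-thresholding loop, noiseless data — not the authors' exact algorithm), one can separate a sparse spike train from a signal that is sparse in an incoherent basis:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis, incoherent with the identity basis."""
    k = np.arange(n)
    Psi = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    Psi[0] /= np.sqrt(2)
    return Psi * np.sqrt(2.0 / n)

def demix(y, Psi, s, iters=50):
    """Alternating hard thresholding for y = w + Psi @ z with s-sparse w, z."""
    def hard(v, s):                              # keep the s largest entries
        out = np.zeros_like(v)
        idx = np.argsort(-np.abs(v))[:s]
        out[idx] = v[idx]
        return out
    w = np.zeros_like(y)
    z = np.zeros_like(y)
    for _ in range(iters):
        w = hard(y - Psi @ z, s)                 # refit spikes given z
        z = hard(Psi.T @ (y - w), s)             # refit DCT part given w
    return w, z

n, s = 128, 3
rng = np.random.default_rng(5)
Psi = dct_matrix(n)
w_true = np.zeros(n)
w_true[rng.choice(n, s, replace=False)] = rng.normal(size=s) + 5
z_true = np.zeros(n)
z_true[rng.choice(n, s, replace=False)] = rng.normal(size=s) + 5
y = w_true + Psi @ z_true                        # observed superposition
w_hat, z_hat = demix(y, Psi, s)
err = np.linalg.norm(w_hat - w_true) + np.linalg.norm(z_hat - z_true)
```

Recovery hinges on the two bases being incoherent: a few spikes spread out under the DCT and vice versa, so each thresholding step sees the other component as small, diffuse interference.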