1 code implementation • 17 Feb 2021 • Ashish Katiyar, Soumya Basu, Vatsal Shah, Constantine Caramanis
Furthermore, we present a polynomial-time, sample-efficient algorithm that recovers the exact tree when recovery is possible, and otherwise recovers the tree up to the unidentifiability guaranteed by our characterization.
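For context, a minimal sketch of the classical noiseless baseline for tree recovery, the Chow-Liu algorithm, which builds a maximum-weight spanning tree on empirical pairwise mutual information. This is not the paper's robust algorithm; all names and the binary-variable assumption are illustrative:

```python
# Chow-Liu sketch: maximum-weight spanning tree on empirical pairwise
# mutual information (Kruskal's algorithm with union-find).
import numpy as np
from itertools import combinations

def empirical_mi(x, y):
    """Mutual information between two binary {0,1} sample vectors."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((x == a) & (y == b))
            p_a, p_b = np.mean(x == a), np.mean(y == b)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def chow_liu_tree(samples):
    """samples: (n, d) binary matrix; returns the recovered tree edges."""
    n, d = samples.shape
    edges = sorted(
        ((empirical_mi(samples[:, i], samples[:, j]), i, j)
         for i, j in combinations(range(d), 2)),
        reverse=True)
    parent = list(range(d))
    def find(u):                       # union-find with path compression
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for w, i, j in edges:              # greedily add highest-MI edges
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree
```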
no code implementations • 28 Nov 2020 • Vatsal Shah, Soumya Basu, Anastasios Kyrillidis, Sujay Sanghavi
In this paper, we aim to characterize the performance of adaptive methods in the over-parameterized linear regression setting.
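A minimal sketch of the setting, assuming a synthetic over-parameterized least-squares problem (d > n): gradient descent initialized at zero converges to the minimum-l2-norm interpolant, while Adam's per-coordinate preconditioning generally lands on a different interpolant. Problem sizes and step sizes are illustrative assumptions:

```python
# Over-parameterized least squares (d > n): GD from zero finds the
# minimum-l2-norm interpolant; Adam typically does not.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                      # over-parameterized: d > n
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

def grad(w):                        # gradient of 0.5 * ||Xw - y||^2
    return X.T @ (X @ w - y)

# Plain gradient descent
w_gd = np.zeros(d)
for _ in range(20000):
    w_gd -= 1e-3 * grad(w_gd)

# Adam
w_ad = np.zeros(d)
m = v = np.zeros(d)
b1, b2, eps = 0.9, 0.999, 1e-8
for t in range(1, 20001):
    g = grad(w_ad)
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    mh, vh = m / (1 - b1 ** t), v / (1 - b2 ** t)
    w_ad -= 1e-3 * mh / (np.sqrt(vh) + eps)

w_min = np.linalg.pinv(X) @ y        # minimum-norm interpolant
print(np.linalg.norm(w_gd - w_min))  # ~0: GD recovers the min-norm solution
print(np.linalg.norm(w_ad - w_min))  # typically larger
```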
no code implementations • 10 Jun 2020 • Ashish Katiyar, Vatsal Shah, Constantine Caramanis
We consider the task of learning Ising models when the signs of different random variables are flipped independently with possibly unequal, unknown probabilities.
1 code implementation • 10 Jan 2020 • Vatsal Shah, Xiaoxia Wu, Sujay Sanghavi
The presence of outliers can significantly skew the parameters of machine learning models trained via stochastic gradient descent (SGD).
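One simple heuristic in this spirit is to prefer low-loss samples within each step. The following is a hedged sketch of that idea on robust linear regression, not a faithful reproduction of the paper's algorithm; the setup and hyperparameters are illustrative assumptions:

```python
# Min-loss selection sketch: draw k candidate samples, step only on the one
# with the lowest current loss, so gross outliers (which tend to have large
# loss) are rarely selected.
import numpy as np

rng = np.random.default_rng(1)
n, d, k, lr = 500, 10, 4, 1e-2
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)
y[:50] += 20.0                           # gross outliers in 10% of the labels

w = np.zeros(d)
for _ in range(20000):
    idx = rng.integers(0, n, size=k)     # k candidate samples
    losses = (X[idx] @ w - y[idx]) ** 2
    i = idx[np.argmin(losses)]           # keep the lowest-loss candidate
    w -= lr * (X[i] @ w - y[i]) * X[i]   # SGD step on that sample only

print(np.linalg.norm(w - w_true))        # small despite the outliers
```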
1 code implementation • ICML 2020 • John Chen, Vatsal Shah, Anastasios Kyrillidis
We introduce Negative Sampling in Semi-Supervised Learning (NS3L), a simple, fast, easy-to-tune algorithm for semi-supervised learning (SSL).
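A minimal sketch of a negative-sampling loss for unlabeled data, assuming classes with predicted probability below a threshold are treated as negatives; the threshold, shapes, and weighting are illustrative assumptions, not the paper's exact recipe:

```python
# Negative-sampling SSL loss sketch: for an unlabeled batch, treat classes
# whose predicted probability is below a threshold as "negative" labels and
# penalize -log(1 - p) on them, pushing those probabilities toward zero.
import torch

def negative_sampling_loss(logits, threshold=0.05):
    """logits: (batch, num_classes) for unlabeled examples."""
    probs = torch.softmax(logits, dim=1)
    negatives = (probs < threshold).float()         # confident non-labels
    loss = -(negatives * torch.log1p(-probs.clamp(max=1 - 1e-6))).sum(dim=1)
    return loss.mean()

# Usage: add this to the supervised loss with a weighting coefficient.
logits = torch.randn(8, 10)
print(negative_sampling_loss(logits))
```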
no code implementations • 16 Nov 2018 • Vatsal Shah, Anastasios Kyrillidis, Sujay Sanghavi
We empirically show that minimum weight norm is not necessarily the proper gauge of good generalization, even in simplified scenarios, and that models found by adaptive methods can outperform those found by plain gradient methods.
1 code implementation • ICASSP 2018 • Vatsal Shah, Vineet Gandhi
Uneven illumination and shadows in document images pose a challenge for digitization applications and automated workflows.
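For comparison, a common classical baseline (not the method proposed in the paper) estimates the smooth illumination field with a large median blur and divides it out; the kernel size is an illustrative assumption:

```python
# Classical illumination-flattening baseline for document images:
# estimate the background with a large median blur, then divide it out.
import cv2
import numpy as np

def flatten_illumination(gray, ksize=51):
    """gray: uint8 grayscale document image; ksize: odd blur kernel size."""
    background = cv2.medianBlur(gray, ksize)            # smooth illumination map
    corrected = gray.astype(np.float32) / np.maximum(background, 1)
    return cv2.normalize(corrected, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)

# Usage: img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
#        cv2.imwrite("flat.png", flatten_illumination(img))
```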
no code implementations • 4 May 2017 • Vatsal Shah, Nikhil Rao, Weicong Ding
While there has been recent research on incorporating explicit side information in the low-rank matrix factorization setting, implicit information can often be gleaned from the data itself, via higher-order interactions among entities.
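As one illustration of gleaning implicit side information, the sketch below regularizes item factors with an item-item co-occurrence graph derived from the ratings themselves; the formulation and hyperparameters are illustrative assumptions, not the paper's model:

```python
# Graph-regularized matrix factorization sketch: an item co-occurrence graph
# (an implicit signal from the ratings) encourages frequently co-consumed
# items to have similar embeddings.
import numpy as np

rng = np.random.default_rng(2)
n_users, n_items, r, lam, mu, lr = 50, 40, 5, 0.1, 0.05, 0.01
R = (rng.random((n_users, n_items)) < 0.2) * rng.integers(1, 6, (n_users, n_items))
mask = R > 0
S = mask.T.astype(float) @ mask.astype(float)   # item-item co-occurrence counts
np.fill_diagonal(S, 0)
L = np.diag(S.sum(1)) - S                       # graph Laplacian

U = 0.1 * rng.normal(size=(n_users, r))
V = 0.1 * rng.normal(size=(n_items, r))
for _ in range(500):
    E = mask * (U @ V.T - R)                    # error on observed entries only
    U -= lr * (E @ V + lam * U)
    V -= lr * (E.T @ U + lam * V + mu * (L @ V))  # graph smoothness term
print(np.sqrt((E ** 2).sum() / mask.sum()))     # train RMSE
```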
no code implementations • 22 Mar 2016 • Vatsal Shah, Megasthenis Asteris, Anastasios Kyrillidis, Sujay Sanghavi
Stochastic gradient descent is the method of choice for large-scale machine learning problems, by virtue of its low per-iteration complexity.
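A minimal sketch of why the per-iteration cost is light, assuming a synthetic least-squares problem: each SGD step touches a single sample (O(d) work) rather than all n samples (O(nd) work), as a full-gradient step would:

```python
# Plain SGD on least squares: one sample per step keeps each iteration cheap.
import numpy as np

rng = np.random.default_rng(3)
n, d = 10000, 50
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d)                 # noiseless, so SGD can interpolate

w = np.zeros(d)
for _ in range(100000):
    i = rng.integers(n)                    # one sample per step: O(d) work
    g = (X[i] @ w - y[i]) * X[i]           # stochastic gradient
    w -= 0.01 * g

print(np.linalg.norm(X @ w - y) / np.sqrt(n))   # residual shrinks toward 0
```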