Search Results for author: Rishi Sonthalia

Found 14 papers, 6 papers with code

Discrete error dynamics of mini-batch gradient descent for least squares regression

no code implementations • 6 Jun 2024 Jackie Lok, Rishi Sonthalia, Elizaveta Rebrova

We study the discrete dynamics of mini-batch gradient descent for least squares regression when sampling without replacement.
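The sampling-without-replacement setting can be sketched as epoch-wise shuffling: each epoch partitions the data into disjoint mini-batches. A minimal illustration of this setup (variable names, step size, and data model are my own assumptions, not taken from the paper):

```python
import numpy as np

def minibatch_gd_least_squares(X, y, batch_size, lr, epochs, seed=0):
    """Mini-batch GD for min_w ||Xw - y||^2, sampling without
    replacement: each epoch shuffles the data and walks through
    disjoint mini-batches."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        perm = rng.permutation(n)
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            grad = Xb.T @ (Xb @ w - yb) / len(idx)
            w -= lr * grad
    return w

# Noiseless, consistent system: the iterates converge to w_true.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
w_true = rng.standard_normal(5)
y = X @ w_true
w = minibatch_gd_least_squares(X, y, batch_size=20, lr=0.1, epochs=200)
```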

Near-Interpolators: Rapid Norm Growth and the Trade-Off between Interpolation and Generalization

1 code implementation • 12 Mar 2024 Yutong Wang, Rishi Sonthalia, Wei Hu

Under a random matrix theoretic assumption on the data distribution and an eigendecay assumption on the data covariance matrix $\boldsymbol{\Sigma}$, we demonstrate that any near-interpolator exhibits rapid norm growth: for $\tau$ fixed, $\boldsymbol{\beta}$ has squared $\ell_2$-norm $\mathbb{E}[\|{\boldsymbol{\beta}}\|_{2}^{2}] = \Omega(n^{\alpha})$ where $n$ is the number of samples and $\alpha >1$ is the exponent of the eigendecay, i.e., $\lambda_i(\boldsymbol{\Sigma}) \sim i^{-\alpha}$.

Spectral Neural Networks: Approximation Theory and Optimization Landscape

no code implementations • 1 Oct 2023 Chenghui Li, Rishi Sonthalia, Nicolas Garcia Trillos

There is a large variety of machine learning methodologies that are based on the extraction of spectral geometric information from data.
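One hypothetical example of such a methodology is Laplacian-eigenmaps-style spectral embedding, which extracts geometric information from the eigenvectors of a graph Laplacian built from the data. The kernel choice and construction details below are illustrative assumptions, not code from the paper:

```python
import numpy as np

def spectral_embedding(X, k, sigma=1.0):
    """Embed n points into R^k using the bottom nontrivial
    eigenvectors of a Gaussian-kernel graph Laplacian."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq / (2 * sigma ** 2))   # affinity matrix
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W                            # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    return vecs[:, 1:k + 1]              # skip the constant eigenvector

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
Y = spectral_embedding(X, k=2)
```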

Least Squares Regression Can Exhibit Under-Parameterized Double Descent

no code implementations • 24 May 2023 Xinyue Li, Rishi Sonthalia

The relationship between the number of training data points, the number of parameters, and the generalization capabilities has been widely studied.

Denoising, Regression
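A minimal numerical sketch of the setting (the data model, noise level, and feature counts are illustrative assumptions, not the paper's experiments): fit minimum-norm least squares with a varying number of features and record the test error across under- and over-parameterized regimes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d_max = 40, 500, 80
w_star = rng.standard_normal(d_max) / np.sqrt(d_max)
X_tr = rng.standard_normal((n_train, d_max))
X_te = rng.standard_normal((n_test, d_max))
y_tr = X_tr @ w_star + 0.5 * rng.standard_normal(n_train)
y_te = X_te @ w_star

test_err = {}
for d in (10, 40, 80):  # under-, critically, and over-parameterized
    # pinv gives the minimum-norm least squares solution in all regimes
    w_hat = np.linalg.pinv(X_tr[:, :d]) @ y_tr
    test_err[d] = np.mean((X_te[:, :d] @ w_hat - y_te) ** 2)
```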

How can classical multidimensional scaling go wrong?

1 code implementation NeurIPS 2021 Rishi Sonthalia, Gregory Van Buskirk, Benjamin Raichel, Anna C. Gilbert

While $D_l$ is not a metric, when it is given as input to cMDS in place of $D$, it empirically yields solutions whose distance to $D$ does not increase as the embedding dimension increases, and whose classification accuracy degrades less than that of the cMDS solution computed from $D$.
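For reference, classical MDS (cMDS) itself can be sketched as double-centering the squared distance matrix and embedding with the top-$k$ eigenpairs; this is the textbook construction, not code from the paper:

```python
import numpy as np

def classical_mds(D, k):
    """Classical multidimensional scaling: double-center the squared
    distance matrix and embed using the top-k eigenpairs."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J          # Gram matrix when D is Euclidean
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]     # top-k eigenvalues
    lam = np.clip(vals[idx], 0.0, None)  # clip negative eigenvalues to 0
    return vecs[:, idx] * np.sqrt(lam)

# Points on a line: cMDS recovers them up to rotation and translation.
X = np.arange(5, dtype=float).reshape(-1, 1)
D = np.abs(X - X.T)
Y = classical_mds(D, k=1)
```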

Project and Forget: Solving Large-Scale Metric Constrained Problems

1 code implementation • 8 May 2020 Rishi Sonthalia, Anna C. Gilbert

Given a set of dissimilarity measurements amongst data points, determining what metric representation is most "consistent" with the input measurements or the metric that best captures the relevant geometric features of the data is a key step in many machine learning algorithms.

Clustering, Metric Learning
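The triangle-inequality ("metric constrained") flavor of such problems can be illustrated with naive cyclic projections onto one violated constraint at a time. The actual Project and Forget algorithm uses Bregman projections and drops inactive ("forgotten") constraints to scale to large instances; this sketch omits both and is only a toy:

```python
import numpy as np
from itertools import permutations

def project_triangle(D, i, j, k):
    """l2 projection of the three entries onto the single constraint
    D[i,j] <= D[i,k] + D[k,j], keeping D symmetric."""
    v = D[i, j] - D[i, k] - D[k, j]
    if v > 0:
        d = v / 3.0
        D[i, j] -= d; D[j, i] -= d
        D[i, k] += d; D[k, i] += d
        D[k, j] += d; D[j, k] += d

def metric_repair(D, sweeps=100):
    """Naive cyclic projections onto all triangle inequalities."""
    D = D.copy()
    n = D.shape[0]
    for _ in range(sweeps):
        for i, j, k in permutations(range(n), 3):
            project_triangle(D, i, j, k)
    return D

# D[0,2] = 3 > 1 + 1 violates the triangle inequality.
D = np.array([[0., 1., 3.],
              [1., 0., 1.],
              [3., 1., 0.]])
D_fixed = metric_repair(D)
```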

Tree! I am no Tree! I am a Low Dimensional Hyperbolic Embedding

3 code implementations NeurIPS 2020 Rishi Sonthalia, Anna C. Gilbert

Given data, finding a faithful low-dimensional hyperbolic embedding of the data is a key method by which we can extract hierarchical information or learn representative geometric features of the data.
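A basic ingredient of any hyperbolic embedding is the distance function of the model space. A minimal sketch for the Poincaré ball follows (the paper's own tree-based construction is not reproduced here); distances blow up near the boundary, which is what leaves room for hierarchical, tree-like data:

```python
import numpy as np

def poincare_distance(u, v):
    """Hyperbolic distance between points u, v in the Poincare ball
    (requires ||u|| < 1 and ||v|| < 1)."""
    uu = np.dot(u, u)
    vv = np.dot(v, v)
    diff = u - v
    num = 2.0 * np.dot(diff, diff)
    return np.arccosh(1.0 + num / ((1.0 - uu) * (1.0 - vv)))

origin = np.zeros(2)
near = np.array([0.1, 0.0])   # close to the origin
far = np.array([0.99, 0.0])   # close to the boundary
d_near = poincare_distance(origin, near)
d_far = poincare_distance(origin, far)
```

For a point at Euclidean radius $r$ from the origin, this reduces to the closed form $2\,\mathrm{artanh}(r)$, which diverges as $r \to 1$.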

Project and Forget: Solving Large Scale Metric Constrained Problems

no code implementations • 25 Sep 2019 Anna C. Gilbert, Rishi Sonthalia

Given a set of distances amongst points, determining what metric representation is most "consistent" with the input distances or the metric that captures the relevant geometric features of the data is a key step in many machine learning algorithms.

Metric Learning

Unsupervised Metric Learning in Presence of Missing Data

3 code implementations • 19 Jul 2018 Anna C. Gilbert, Rishi Sonthalia

Here, we present a new algorithm MR-MISSING that extends these previous algorithms and can be used to compute low dimensional representation on data sets with missing entries.

Dimensionality Reduction, Matrix Completion +1
