Search Results for author: Pratik Worah

Found 8 papers, 1 paper with code

Approximating a linear dynamical system from non-sequential data

no code implementations • 22 Jan 2024 • Cliff Stein, Pratik Worah

Given non-sequential snapshots from instances of a dynamical system, we design a compressed sensing based algorithm that reconstructs the dynamical system.
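The paper's algorithm works from *unordered* snapshots; purely as a toy illustration of the compressed-sensing ingredient, the sketch below recovers a sparse dynamics matrix $A$ from paired snapshots $(x, Ax)$ via $\ell_1$-regularized regression. The dimensions, sparsity level, and regularization strength are all assumptions, not values from the paper.

```python
# Toy sketch (NOT the paper's algorithm, which handles unordered
# snapshots): recover a sparse dynamics matrix A from paired
# snapshots (x, Ax) by l1-regularized least squares, one row at a time.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, m = 20, 200                       # state dimension, number of snapshots

A_true = np.zeros((d, d))            # sparse ground-truth dynamics
idx = rng.choice(d * d, size=3 * d, replace=False)
A_true.flat[idx] = rng.normal(size=3 * d)

X = rng.normal(size=(m, d))          # snapshots x_i
Y = X @ A_true.T                     # next states A x_i

# One Lasso fit per output coordinate recovers one row of A.
A_hat = np.vstack([
    Lasso(alpha=1e-3, fit_intercept=False).fit(X, Y[:, j]).coef_
    for j in range(d)
])
print("relative recovery error:",
      np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true))
```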

Enhancing selectivity using Wasserstein distance based reweighing

no code implementations • 21 Jan 2024 • Pratik Worah

Given two labeled data-sets $\mathcal{S}$ and $\mathcal{T}$, we design a simple and efficient greedy algorithm to reweigh the loss function such that the limiting distribution of the neural network weights that result from training on $\mathcal{S}$ approaches the limiting distribution that would have resulted by training on $\mathcal{T}$.
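As a hedged sketch of the reweighing idea (not the paper's exact greedy rule), the snippet below greedily upweights examples of a 1-D statistic of $\mathcal{S}$ so that its weighted distribution moves toward that of $\mathcal{T}$ under the Wasserstein-1 distance. The sample sizes, unit weight increment, and iteration budget are assumptions.

```python
# Greedy Wasserstein reweighing sketch: repeatedly bump the single
# example weight that most reduces W1(S_weighted, T).
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
S = rng.normal(0.0, 1.0, size=100)   # 1-D statistic of examples in S
T = rng.normal(1.0, 1.0, size=100)   # same statistic for T

w = np.ones_like(S)                  # per-example loss weights
for _ in range(100):
    base = wasserstein_distance(S, T, u_weights=w)
    gains = []
    for i in range(len(S)):          # try a unit bump of each weight
        w[i] += 1.0
        gains.append(base - wasserstein_distance(S, T, u_weights=w))
        w[i] -= 1.0
    best = int(np.argmax(gains))
    if gains[best] <= 0:             # no bump helps any more
        break
    w[best] += 1.0

print("final W1 distance:", wasserstein_distance(S, T, u_weights=w))
```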

Learning Rate Schedules in the Presence of Distribution Shift

1 code implementation • 27 Mar 2023 • Matthew Fahrbach, Adel Javanmard, Vahab Mirrokni, Pratik Worah

We design learning rate schedules that minimize regret for SGD-based online learning in the presence of a changing data distribution.
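An illustrative toy (the schedules here are assumptions, not the paper's derived optimum): online SGD tracking a drifting linear-regression target. A pure $1/t$ decay effectively stops adapting once the distribution keeps shifting, while a floored schedule continues to track.

```python
# Compare a 1/t learning-rate decay against a floored schedule under
# continual distribution shift (a drifting regression target).
import numpy as np

rng = np.random.default_rng(2)
d, steps = 5, 5000
theta_star = rng.normal(size=d)          # drifting ground-truth parameter
w_decay, w_floor = np.zeros(d), np.zeros(d)

for t in range(1, steps + 1):
    theta_star += 0.01 * rng.normal(size=d)      # distribution shift
    x = rng.normal(size=d)
    y = x @ theta_star + 0.1 * rng.normal()
    for w, lr in ((w_decay, 1.0 / t), (w_floor, max(1.0 / t, 0.05))):
        g = (w @ x - y) * x                      # squared-loss gradient
        w -= lr * g                              # in-place SGD step

print("1/t decay tracking error:   ", np.linalg.norm(w_decay - theta_star))
print("floored schedule tracking error:", np.linalg.norm(w_floor - theta_star))
```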

Task: regression
Learning to Price Against a Moving Target

no code implementations • 8 Jun 2021 • Renato Paes Leme, Balasubramanian Sivan, Yifeng Teng, Pratik Worah

In the Learning to Price setting, a seller posts prices over time with the goal of maximizing revenue while learning the buyer's valuation.
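A toy baseline, not the paper's algorithm: binary-search posted pricing with the search interval re-widened each round so the seller can follow a slowly moving valuation. The drift rate and widening amount are assumptions.

```python
# Posted-price binary search against a drifting buyer valuation.
import numpy as np

rng = np.random.default_rng(3)
v = 0.5                                  # buyer's hidden valuation in [0, 1]
lo, hi = 0.0, 1.0                        # current search interval
revenue = 0.0

for t in range(1000):
    v = float(np.clip(v + 0.005 * rng.normal(), 0.0, 1.0))  # moving target
    p = (lo + hi) / 2.0                  # posted price
    if p <= v:                           # buyer accepts whenever price <= value
        revenue += p
        lo = p
    else:
        hi = p
    # Re-widen the interval each round to stay robust to drift.
    lo, hi = max(0.0, lo - 0.01), min(1.0, hi + 0.01)

print("total revenue:", revenue)
```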

Limiting Behaviors of Nonconvex-Nonconcave Minimax Optimization via Continuous-Time Systems

no code implementations • 20 Oct 2020 • Benjamin Grimmer, Haihao Lu, Pratik Worah, Vahab Mirrokni

Unlike nonconvex optimization, where gradient descent is guaranteed to converge to a local optimizer, algorithms for nonconvex-nonconcave minimax optimization can have topologically different solution paths: sometimes converging to a solution, sometimes never converging and instead following a limit cycle, and sometimes diverging.
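The bilinear objective $f(x, y) = xy$ already exhibits this: the continuous-time flow orbits the origin (conserving $x^2 + y^2$), while discrete simultaneous gradient descent-ascent spirals outward, as the sketch below verifies.

```python
# Simultaneous gradient descent-ascent on f(x, y) = x * y diverges:
# each step multiplies x^2 + y^2 by exactly (1 + eta^2).
import numpy as np

x, y, eta = 1.0, 0.0, 0.1
for t in range(200):
    gx, gy = y, x                        # grad_x f = y, grad_y f = x
    x, y = x - eta * gx, y + eta * gy    # simultaneous GDA step

print("distance from saddle after 200 steps:", np.hypot(x, y))
```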

The Landscape of the Proximal Point Method for Nonconvex-Nonconcave Minimax Optimization

no code implementations • 15 Jun 2020 • Benjamin Grimmer, Haihao Lu, Pratik Worah, Vahab Mirrokni

Critically, we show this envelope not only smooths the objective but can convexify and concavify it based on the level of interaction present between the minimizing and maximizing variables.
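On the same bilinear toy $f(x, y) = xy$, the exact proximal-point step has a closed form and contracts toward the saddle where plain gradient descent-ascent spirals out; the step size $\lambda$ below is an arbitrary illustrative choice, and this sketch does not reproduce the paper's envelope analysis.

```python
# Exact proximal-point step on f(x, y) = x * y: each step divides
# x^2 + y^2 by exactly (1 + lam^2), so the iterates converge.
import numpy as np

x, y, lam = 1.0, 0.0, 0.5
for t in range(50):
    # closed-form solution of
    #   min_x max_y  x*y + (x - x_k)^2/(2*lam) - (y - y_k)^2/(2*lam)
    x, y = (x - lam * y) / (1 + lam**2), (y + lam * x) / (1 + lam**2)

print("distance from saddle after 50 steps:", np.hypot(x, y))
```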

The Spectrum of the Fisher Information Matrix of a Single-Hidden-Layer Neural Network

no code implementations • NeurIPS 2018 • Jeffrey Pennington, Pratik Worah

An important factor contributing to the success of deep learning has been the remarkable ability to optimize large neural networks using simple first-order optimization algorithms like stochastic gradient descent.
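A hedged numerical sketch of the object studied here: estimate the Fisher information matrix $F = \mathbb{E}[\nabla_\theta f \, \nabla_\theta f^\top]$ of a random single-hidden-layer network by averaging per-example gradient outer products. The width, input dimension, and tanh nonlinearity are illustrative assumptions.

```python
# Empirical Fisher information spectrum for f(x) = v . tanh(W x).
import numpy as np

rng = np.random.default_rng(4)
d, h, n = 10, 20, 2000                   # input dim, hidden width, samples
W = rng.normal(size=(h, d)) / np.sqrt(d)
v = rng.normal(size=h) / np.sqrt(h)

p = h * d + h                            # total parameter count
F = np.zeros((p, p))
for _ in range(n):
    x = rng.normal(size=d)
    a = np.tanh(W @ x)                   # hidden activations
    # df/dW = outer(v * (1 - a^2), x),  df/dv = a
    g = np.concatenate([np.outer(v * (1 - a**2), x).ravel(), a])
    F += np.outer(g, g) / n

eigs = np.linalg.eigvalsh(F)
print("eigenvalues: min %.3g, median %.3g, max %.3g"
      % (eigs.min(), np.median(eigs), eigs.max()))
```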

Nonlinear random matrix theory for deep learning

no code implementations • NeurIPS 2017 • Jeffrey Pennington, Pratik Worah

Neural network configurations with random weights play an important role in the analysis of deep learning.
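A minimal sketch of the kind of random matrix such analyses describe: the empirical spectrum of the Gram matrix of post-activations $Y = f(WX)$ for Gaussian weights $W$ and data $X$. The ReLU nonlinearity and the dimension ratios are assumptions, not the paper's setup.

```python
# Spectrum of the Gram matrix of post-activations Y = relu(W X)
# with random Gaussian weights and data.
import numpy as np

rng = np.random.default_rng(5)
n0, n1, m = 500, 500, 1000               # input dim, width, data points
X = rng.normal(size=(n0, m))             # random data
W = rng.normal(size=(n1, n0)) / np.sqrt(n0)   # random weights

Y = np.maximum(W @ X, 0.0)               # ReLU post-activations
M = (Y @ Y.T) / m                        # n1 x n1 Gram matrix
eigs = np.linalg.eigvalsh(M)
print("spectral range: [%.3f, %.3f]" % (eigs.min(), eigs.max()))
```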

Task: Memorization
