no code implementations • 22 Jan 2024 • Cliff Stein, Pratik Worah
Given non-sequential snapshots from instances of a dynamical system, we design a compressed-sensing-based algorithm that reconstructs the underlying dynamical system.
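The paper's algorithm is not reproduced here, but the compressed-sensing idea it builds on is sparse recovery via $\ell_1$-regularized regression. The sketch below is a minimal, hypothetical illustration: it recovers the sparse coefficients of a scalar ODE from non-sequential (state, derivative) snapshot pairs, using an illustrative monomial feature library and a plain ISTA solver (neither is taken from the paper).

```python
import numpy as np

# Hypothetical illustration: recover a sparse dynamical law dx/dt = Theta(x) @ xi
# from non-sequential (state, derivative) snapshots via l1-regularized regression.
# The monomial library and the ISTA solver are illustrative, not from the paper.

rng = np.random.default_rng(0)

def library(x):
    # Monomial feature library up to degree 2 for a scalar state.
    return np.stack([np.ones_like(x), x, x**2], axis=1)

# Ground-truth sparse dynamics: dx/dt = 0.5*x - 0.2*x^2 (logistic-type).
xi_true = np.array([0.0, 0.5, -0.2])

# Non-sequential snapshots: random states with noisy observed derivatives.
x = rng.uniform(-1.0, 3.0, size=200)
Theta = library(x)
dxdt = Theta @ xi_true + 0.01 * rng.standard_normal(200)

def ista(A, b, lam=0.01, steps=5000):
    # Iterative soft-thresholding for min_xi 0.5*||A xi - b||^2 + lam*||xi||_1.
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    xi = np.zeros(A.shape[1])
    for _ in range(steps):
        z = xi - A.T @ (A @ xi - b) / L  # gradient step
        xi = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return xi

print("recovered coefficients:", np.round(ista(Theta, dxdt), 3))
```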
no code implementations • 21 Jan 2024 • Pratik Worah
Given two labeled data sets $\mathcal{S}$ and $\mathcal{T}$, we design a simple and efficient greedy algorithm that reweights the loss function so that the limiting distribution of the neural network weights obtained by training on $\mathcal{S}$ approaches the limiting distribution that would have resulted from training on $\mathcal{T}$.
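As a loose illustration of loss reweighting (not the paper's algorithm, whose target is the limiting distribution of trained weights), the toy sketch below greedily upweights source examples so that the weighted source feature mean drifts toward the target feature mean; the setup and the moment-matching criterion are assumptions made for illustration.

```python
import numpy as np

# Toy greedy reweighting sketch (assumed setup, not the paper's algorithm):
# repeatedly upweight the source example whose features best align with the
# gap between the target mean and the current weighted source mean.

rng = np.random.default_rng(1)
S = rng.normal(0.0, 1.0, size=(500, 2))  # source features
T = rng.normal(0.7, 1.0, size=(500, 2))  # target features
target_mean = T.mean(axis=0)

w = np.ones(len(S))                      # per-example loss weights
for _ in range(2000):
    mu = (w[:, None] * S).sum(axis=0) / w.sum()  # current weighted mean
    gap = target_mean - mu
    i = int(np.argmax(S @ gap))          # greedy choice: best-aligned example
    w[i] += 1.0

mu = (w[:, None] * S).sum(axis=0) / w.sum()
print("weighted mean:", np.round(mu, 2), "| target mean:", np.round(target_mean, 2))
```

This toy matches only the first moment of the feature distribution; matching the limiting distribution of the trained network weights, as in the paper, is a much stronger requirement.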
1 code implementation • 27 Mar 2023 • Matthew Fahrbach, Adel Javanmard, Vahab Mirrokni, Pratik Worah
We design learning rate schedules that minimize regret for SGD-based online learning in the presence of a changing data distribution.
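The regret-minimizing schedules in the paper are derived analytically; the following toy, with an assumed drift model and an illustrative step-size floor, only demonstrates why schedule choice matters under distribution shift: a pure $1/t$ decay would eventually stop tracking a drifting optimum, so the sketch keeps the step size bounded below.

```python
import numpy as np

# Toy demonstration of schedule choice under distribution shift (illustrative;
# the paper derives regret-minimizing schedules analytically). The optimum
# drifts each round, so the step size decays but is floored to keep tracking.

rng = np.random.default_rng(2)
theta_star, theta = 0.0, 0.0
losses = []
for t in range(1, 2001):
    theta_star += 0.01                      # slow drift of the optimum
    x = theta_star + rng.standard_normal()  # sample from the current distribution
    eta = max(1.0 / t, 0.05)                # 1/t decay with a floor (assumed schedule)
    theta -= eta * (theta - x)              # SGD step on 0.5*(theta - x)^2
    losses.append(0.5 * (theta - theta_star) ** 2)

print("mean excess loss, last 500 rounds:", round(float(np.mean(losses[-500:])), 4))
```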
no code implementations • 8 Jun 2021 • Renato Paes Leme, Balasubramanian Sivan, Yifeng Teng, Pratik Worah
In the Learning to Price setting, a seller posts prices over time with the goal of maximizing revenue while learning the buyer's valuation.
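As a standard baseline illustration of this setting (not the paper's mechanism), the sketch below posts prices by binary search against a buyer with a fixed, unknown valuation: each rejection shrinks the search interval, and each acceptance earns revenue.

```python
# Classic binary-search pricing baseline (illustrative, not the paper's algorithm):
# the buyer accepts a posted price p iff p <= v, where v is fixed and unknown.

def post_prices(v, rounds=20):
    lo, hi, revenue = 0.0, 1.0, 0.0
    for _ in range(rounds):
        p = (lo + hi) / 2       # posted price: midpoint of the feasible interval
        if p <= v:
            revenue += p        # sale at price p
            lo = p              # valuation is at least p
        else:
            hi = p              # valuation is below p
    return revenue

print("revenue over 20 rounds against v = 0.73:", round(post_prices(0.73), 3))
```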
no code implementations • 20 Oct 2020 • Benjamin Grimmer, Haihao Lu, Pratik Worah, Vahab Mirrokni
Unlike nonconvex optimization, where gradient descent is guaranteed to converge to a local minimizer, algorithms for nonconvex-nonconcave minimax optimization can have topologically different solution paths: sometimes converging to a solution, sometimes never converging and instead following a limit cycle, and sometimes diverging.
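The non-convergent behavior is easy to reproduce on the textbook bilinear example $f(x, y) = xy$: simultaneous gradient descent-ascent spirals away from the unique saddle point at the origin rather than converging. A minimal demonstration (illustrative, not from the paper):

```python
import numpy as np

# Simultaneous gradient descent-ascent on f(x, y) = x * y. The unique saddle
# point is (0, 0), yet every step multiplies the distance from it by
# sqrt(1 + eta^2), so the iterates spiral outward instead of converging.

x, y, eta = 1.0, 0.0, 0.1
for _ in range(100):
    x, y = x - eta * y, y + eta * x  # descend in x, ascend in y
print("distance from the saddle after 100 steps:", round(float(np.hypot(x, y)), 3))
```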
no code implementations • 15 Jun 2020 • Benjamin Grimmer, Haihao Lu, Pratik Worah, Vahab Mirrokni
Critically, we show that this envelope not only smooths the objective but can also convexify and concavify it, depending on the level of interaction between the minimizing and maximizing variables.
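The envelope in the paper is a saddle-point analogue of the Moreau envelope; as a simplified one-variable stand-in, the sketch below computes the classical Moreau envelope $\mathrm{env}_\lambda f(x) = \min_u f(u) + \frac{1}{2\lambda}(u - x)^2$ of an assumed nonconvex test function by brute-force grid minimization, showing the smoothing effect: the inner problem becomes strongly convex once $\lambda$ is below the reciprocal of the weak-convexity modulus of $f$.

```python
import numpy as np

# One-variable Moreau envelope by brute force (a stand-in for the paper's
# saddle envelope): env_lam(x) = min_u f(u) + (u - x)^2 / (2 * lam).
# f below is an assumed nonconvex test function.

f = lambda u: np.sin(3.0 * u) + 0.1 * u**2
u = np.linspace(-5.0, 5.0, 4001)          # grid for the inner minimization

def envelope(x, lam):
    return float(np.min(f(u) + (u - x) ** 2 / (2.0 * lam)))

# f'' >= -8.8 here, so lam = 0.05 < 1/8.8 makes the inner problem strongly convex.
xs = np.linspace(-3.0, 3.0, 7)
print("envelope values:", [round(envelope(x, 0.05), 3) for x in xs])
```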
no code implementations • NeurIPS 2018 • Jeffrey Pennington, Pratik Worah
An important factor contributing to the success of deep learning has been the remarkable ability to optimize large neural networks using simple first-order optimization algorithms like stochastic gradient descent.
no code implementations • NeurIPS 2017 • Jeffrey Pennington, Pratik Worah
Neural network configurations with random weights play an important role in the analysis of deep learning.
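A central object in this line of work is the Gram matrix of a one-layer network with random weights applied to random inputs. The sketch below (the dimensions and the $\tanh$ nonlinearity are illustrative choices, not the paper's setup) computes the empirical spectrum of such a matrix:

```python
import numpy as np

# Empirical spectrum of M = Y^T Y / m with Y = f(W X), for random weights W and
# random inputs X (dimensions and the tanh nonlinearity are illustrative).

rng = np.random.default_rng(3)
n0, n1, m = 400, 400, 800
W = rng.standard_normal((n1, n0)) / np.sqrt(n0)  # random weight matrix
X = rng.standard_normal((n0, m))                 # random input data
Y = np.tanh(W @ X)                               # post-activation features
M = Y.T @ Y / m                                  # Gram matrix of the features
eigs = np.linalg.eigvalsh(M)
print("spectral range:", round(float(eigs.min()), 3), "to", round(float(eigs.max()), 3))
```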