no code implementations • 7 Apr 2024 • Yukti Makhija, Priyanka Agrawal, Rishi Saket, Aravindan Raghuveer
Large language models (LLMs) are being increasingly tuned to power complex generation tasks such as writing, fact-seeking, querying and reasoning.
no code implementations • 28 Mar 2024 • Venkatesan Guruswami, Rishi Saket
This is in contrast with the work of Saket (NeurIPS'21), which gave a $(2/5)$-approximation for learning ORs using a halfspace.
no code implementations • 16 Oct 2023 • Anand Brahmbhatt, Mohith Pokala, Rishi Saket, Aravindan Raghuveer
One of the unique properties of tabular LLP is the ability to create feature bags where all the instances in a bag have the same value for a given feature.
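To make the idea of feature bags concrete, here is a minimal sketch (names and data are hypothetical, not from the paper): instances are grouped by the value of one chosen feature, and each bag exposes only its label proportion, as in the LLP (learning from label proportions) setting.

```python
from collections import defaultdict

def make_feature_bags(rows, labels, feature_idx):
    """Group instances into bags keyed by the value of one feature.

    Each bag carries only its label proportion, not per-instance
    labels, matching the LLP supervision model.
    """
    groups = defaultdict(list)
    for row, y in zip(rows, labels):
        groups[row[feature_idx]].append((row, y))
    bags = {}
    for key, members in groups.items():
        instances = [r for r, _ in members]
        proportion = sum(y for _, y in members) / len(members)
        bags[key] = (instances, proportion)
    return bags

# Toy tabular data: feature 0 is the bagging feature, labels are binary.
rows = [(0, 1.2), (0, 3.4), (1, 0.5), (1, 2.2), (1, 4.0)]
labels = [1, 0, 1, 1, 0]
bags = make_feature_bags(rows, labels, feature_idx=0)
# The bag for feature value 0 has label proportion 0.5; for value 1 it is 2/3.
```

By construction, every instance in a bag shares the same value of the chosen feature, which is the property the paper highlights for tabular LLP.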
no code implementations • 16 Oct 2023 • Anand Brahmbhatt, Rishi Saket, Shreyas Havaldar, Anshul Nasery, Aravindan Raghuveer
Further, the $\ell_2^2$-regressor which minimizes the loss on the aggregated dataset has a loss within a $\left(1 + o(1)\right)$-factor of the optimum on the original dataset w.p.
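The aggregated-regression setup can be illustrated with a small numerical sketch (assumptions: random bags of equal size, synthetic linear data with Gaussian noise; all names here are hypothetical). Bags are replaced by their mean feature vector and mean label, an ordinary least-squares regressor is fit on those aggregates, and its instance-level loss is compared against the instance-level optimum.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, bag_size = 300, 3, 5
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

# Aggregate: each bag of `bag_size` instances becomes one averaged row.
X_bar = X.reshape(-1, bag_size, d).mean(axis=1)
y_bar = y.reshape(-1, bag_size).mean(axis=1)

# l2^2-regressor fit on the aggregated dataset (ordinary least squares).
w_agg, *_ = np.linalg.lstsq(X_bar, y_bar, rcond=None)
# Regressor fit on the original instance-level data, for comparison.
w_opt, *_ = np.linalg.lstsq(X, y, rcond=None)

def instance_loss(w):
    """Mean squared error on the original (non-aggregated) dataset."""
    return float(np.mean((X @ w - y) ** 2))
# With random bags, the aggregated fit's instance-level loss stays
# close to the optimum, in the spirit of the paper's guarantee.
```

This is only a simulation of the phenomenon the paper proves; the formal $(1 + o(1))$-factor bound and its conditions are in the paper itself.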
1 code implementation • 25 Jun 2022 • Jatin Chauhan, Aravindan Raghuveer, Rishi Saket, Jay Nandy, Balaraman Ravindran
Through systematic experiments across 4 datasets and 5 forecast models, we show that our technique is able to recover close to 95% of the models' performance even when only 15% of the original variables are present.
no code implementations • NeurIPS 2021 • Rishi Saket
This bound is tight for the non-monochromatic bags case. The above is in contrast to the usual supervised learning setup (i.e., unit-sized bags) in which LTFs are efficiently learnable to arbitrary accuracy using linear programming, and even a trivial algorithm (any LTF or its complement) achieves an accuracy of $1/2$.
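The unit-sized-bags baseline mentioned above can be sketched as a linear-programming feasibility problem: for fully labeled, linearly separable data, an LTF consistent with all examples is found by solving for $(w, b)$ with $y_i(w \cdot x_i + b) \ge 1$. The toy data below is hypothetical, using SciPy's `linprog` as the LP solver.

```python
import numpy as np
from scipy.optimize import linprog

# Toy linearly separable data in 2D; labels in {-1, +1}.
X = np.array([[2.0, 1.0], [3.0, 2.5], [1.5, 3.0],
              [-1.0, -2.0], [-2.5, -0.5], [-0.5, -3.0]])
y = np.array([1, 1, 1, -1, -1, -1])

# Variables: (w_1, w_2, b). The constraint y_i (w.x_i + b) >= 1
# rewrites as -y_i * [x_i, 1] . (w, b) <= -1, a pure LP feasibility
# problem (zero objective, unbounded variables).
A_ub = -y[:, None] * np.hstack([X, np.ones((len(X), 1))])
b_ub = -np.ones(len(X))
res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 3)
w, b = res.x[:2], res.x[2]
preds = np.sign(X @ w + b)  # LTF predictions on the training points
```

With unit-sized bags every label is observed directly, which is exactly why this LP route works there and fails once labels are only available as bag proportions.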