no code implementations • 7 Apr 2024 • Yukti Makhija, Priyanka Agrawal, Rishi Saket, Aravindan Raghuveer
Large language models (LLMs) are being increasingly tuned to power complex generation tasks such as writing, fact-seeking, querying and reasoning.
no code implementations • 16 Oct 2023 • Anand Brahmbhatt, Mohith Pokala, Rishi Saket, Aravindan Raghuveer
One of the unique properties of tabular LLP is the ability to create feature bags where all the instances in a bag have the same value for a given feature.
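The feature-bag construction described here can be sketched with a simple group-by: every instance in a bag shares the same value of a chosen feature, and only bag-level label proportions are retained. This is a minimal illustration with made-up column names ("region", "income", "label"), not the paper's pipeline.

```python
import pandas as pd

# Toy tabular dataset; "region" is the hypothetical bagging feature.
df = pd.DataFrame({
    "region": ["north", "south", "north", "south", "north"],
    "income": [50, 60, 55, 65, 52],
    "label":  [1, 0, 1, 0, 0],
})

# Feature bags: all instances in a bag have the same "region" value,
# and the instance labels are collapsed into a bag label proportion.
bags = df.groupby("region").agg(
    size=("label", "size"),
    label_proportion=("label", "mean"),
).reset_index()
print(bags)
```

In the LLP setting, only `bags` (feature values plus label proportions) would be available to the learner, not the per-instance labels in `df`.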
no code implementations • 16 Oct 2023 • Anand Brahmbhatt, Rishi Saket, Shreyas Havaldar, Anshul Nasery, Aravindan Raghuveer
Further, the $\ell_2^2$-regressor which minimizes the loss on the aggregated dataset has a loss within a $(1 + o(1))$-factor of the optimum on the original dataset w.p.
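As a rough numerical illustration of this kind of guarantee (not the paper's construction — bag sizes, noise level, and dimensions below are made up), one can fit least squares on a randomly bag-aggregated version of a linear dataset and compare its squared loss on the original data against the unaggregated optimum:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, bag_size = 2000, 5, 4
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

# Aggregate random disjoint bags by averaging features and labels.
idx = rng.permutation(n).reshape(-1, bag_size)
X_agg = X[idx].mean(axis=1)
y_agg = y[idx].mean(axis=1)

w_opt, *_ = np.linalg.lstsq(X, y, rcond=None)        # optimum on originals
w_bag, *_ = np.linalg.lstsq(X_agg, y_agg, rcond=None)  # fit on aggregates only

def loss(w):
    """Mean squared loss on the ORIGINAL (unaggregated) dataset."""
    return np.mean((X @ w - y) ** 2)

print(loss(w_opt), loss(w_bag))
```

For a well-specified linear model with mean aggregation, the bag-fit regressor's loss on the original data sits within a small multiplicative factor of the optimum, matching the flavor of the stated bound.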
no code implementations • 12 Oct 2023 • Shreyas Havaldar, Navodita Sharma, Shubhi Sareen, Karthikeyan Shanmugam, Aravindan Raghuveer
We then use Belief Propagation (BP) to marginalize the Gibbs distribution to obtain pseudo labels.
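On a tree-structured graph (here, a 3-node binary chain), sum-product belief propagation computes the exact marginals of a Gibbs distribution, which can then be argmaxed into pseudo labels. The potentials below are arbitrary toy values, not the paper's model:

```python
import numpy as np

# Toy Gibbs distribution on a 3-node binary chain:
#   p(x) ∝ prod_i phi[i][x_i] * prod_i psi[x_i, x_{i+1}]
phi = np.array([[1.0, 2.0], [3.0, 1.0], [1.0, 1.5]])  # unary potentials
psi = np.array([[2.0, 1.0], [1.0, 2.0]])              # pairwise potential

n = 3
# Forward and backward sum-product messages (exact on a chain).
fwd = [np.ones(2) for _ in range(n)]
bwd = [np.ones(2) for _ in range(n)]
for i in range(1, n):
    fwd[i] = psi.T @ (phi[i - 1] * fwd[i - 1])
for i in range(n - 2, -1, -1):
    bwd[i] = psi @ (phi[i + 1] * bwd[i + 1])

# Node marginals are proportional to unary potential × incoming messages.
marginals = np.array([phi[i] * fwd[i] * bwd[i] for i in range(n)])
marginals /= marginals.sum(axis=1, keepdims=True)

# Pseudo labels: the most likely state under each marginal.
pseudo_labels = marginals.argmax(axis=1)
print(marginals)
print(pseudo_labels)
```

Because the chain is a tree, these BP marginals agree exactly with brute-force enumeration of all $2^3$ configurations; on loopy graphs BP yields approximate marginals instead.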
1 code implementation • 11 Oct 2023 • Shreyas Havaldar, Jatin Chauhan, Karthikeyan Shanmugam, Jay Nandy, Aravindan Raghuveer
Our third contribution is theoretical, where we show that our weighted entropy term along with prediction loss on the training set approximates test loss under covariate shift.
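The covariate-shift claim is in the spirit of a standard importance-weighting identity: a suitably reweighted training loss is an unbiased estimate of the test loss. A toy check with a known Gaussian shift (this is the generic identity, not the paper's weighted-entropy term; the distributions and loss are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Covariate shift: train x ~ N(0, 1), test x ~ N(1, 1); loss is fixed.
mu_tr, mu_te = 0.0, 1.0
x_tr = rng.normal(mu_tr, 1.0, 5000)
x_te = rng.normal(mu_te, 1.0, 5000)

def loss(x):
    return (x - 0.5) ** 2  # arbitrary per-example loss

# Density ratio w(x) = p_test(x) / p_train(x) for these two Gaussians.
w = np.exp(x_tr * (mu_te - mu_tr) - (mu_te**2 - mu_tr**2) / 2)

est_weighted = np.mean(w * loss(x_tr))  # reweighted training loss
est_test = np.mean(loss(x_te))          # actual test loss
print(est_weighted, est_test)
```

The two estimates agree up to sampling noise, illustrating why a training-set quantity with the right weights can stand in for the test loss under covariate shift.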
no code implementations • 6 Jul 2023 • Abhirut Gupta, Ananya B. Sai, Richard Sproat, Yuri Vasilevski, James S. Ren, Ambarish Jash, Sukhdeep S. Sodhi, Aravindan Raghuveer
To the best of our knowledge, FunGLUE is the first benchmark to introduce L1-L2 interactions in text.
no code implementations • 19 May 2023 • Deepanway Ghosal, Preksha Nema, Aravindan Raghuveer
The task of table summarization involves generating text that both succinctly and accurately represents the table or a specific set of highlighted cells within a table.
no code implementations • 3 Dec 2022 • Anubhav Jangra, Preksha Nema, Aravindan Raghuveer
In this work, we study the usefulness of the Abstract Meaning Representation (AMR) graph as an intermediate, style-agnostic representation.
1 code implementation • 25 Jun 2022 • Jatin Chauhan, Aravindan Raghuveer, Rishi Saket, Jay Nandy, Balaraman Ravindran
Through systematic experiments across 4 datasets and 5 forecast models, we show that our technique recovers close to 95% of the models' performance even when only 15% of the original variables are present.
no code implementations • EMNLP 2021 • Sahana Ramnath, Melvin Johnson, Abhirut Gupta, Aravindan Raghuveer
For such cases, we propose training the model with additional hints (as target tags on the decoder) that provide information about the operation required on the source (translation or both translation and transliteration).