2 code implementations • 12 Mar 2024 • Jiwoo Hong, Noah Lee, James Thorne
While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence.
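To make the contrast between SFT and preference alignment concrete, here is a minimal pure-Python sketch of one widely used preference-alignment objective, the pairwise DPO loss. This is an illustration of the general technique, not this paper's algorithm, and the log-probability values in the usage lines are made up:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single preference pair.

    Inputs are summed token log-probabilities of the chosen and
    rejected responses under the policy and a frozen reference model.
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen over the rejected response, relative to the reference model.
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # -log sigmoid(logits): small when the policy clearly favors
    # the chosen response, log(2) when it is indifferent.
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# Hypothetical log-probs: with no preference the loss is log 2;
# as the policy's margin for the chosen response grows, the loss falls.
indifferent = dpo_loss(-10.0, -10.0, -10.0, -10.0)
preferring = dpo_loss(-5.0, -20.0, -10.0, -10.0)
```

Gradient descent on this loss pushes the policy's log-probability margin for preferred responses above the reference model's, which is why such objectives are typically applied after an SFT stage has given the policy a reasonable starting point.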
1 code implementation • 3 Nov 2023 • Kevin Vogt-Lowell, Noah Lee, Theodoros Tsiligkaridis, Marc Vaillant
To address these gaps, we present a new recipe for few-shot fine-tuning of the popular vision-language foundation model CLIP and evaluate its performance on challenging benchmark datasets with realistic distribution shifts from the WILDS collection.
1 code implementation • 23 May 2023 • Noah Lee, Na Min An, James Thorne
Large language models (LLMs) have achieved impressive results across a broad range of tasks.