Search Results for author: Lucas Weber

Found 5 papers, 1 paper with code

tinyBenchmarks: evaluating LLMs with fewer examples

2 code implementations • 22 Feb 2024 • Felipe Maia Polo, Lucas Weber, Leshem Choshen, Yuekai Sun, Gongjun Xu, Mikhail Yurochkin

The versatility of large language models (LLMs) has led to the creation of diverse benchmarks that thoroughly test a variety of language models' abilities.

Multiple-choice
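
The core idea in the title, estimating a model's full-benchmark score from far fewer examples, can be illustrated with a toy sketch. The snippet below uses plain random subsampling plus a bootstrap confidence interval; the paper itself develops more sophisticated estimators based on curated example subsets, so the data, function, and numbers here are hypothetical stand-ins rather than the authors' implementation.

```python
# Toy illustration of the tinyBenchmarks idea: estimate a model's
# full-benchmark accuracy from a small subsample of examples.
# Plain random subsampling + bootstrap CI; the paper's method uses
# curated subsets and more sophisticated estimators instead.
import random

def estimate_accuracy(per_example_correct, k=100, n_boot=1000, seed=0):
    """Estimate benchmark accuracy from k examples, with a bootstrap CI.

    per_example_correct: list of 0/1 outcomes, one per benchmark example.
    """
    rng = random.Random(seed)
    sample = rng.sample(per_example_correct, k)
    point = sum(sample) / k
    boots = []
    for _ in range(n_boot):
        resample = [rng.choice(sample) for _ in range(k)]
        boots.append(sum(resample) / k)
    boots.sort()
    lo, hi = boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot)]
    return point, (lo, hi)

# Hypothetical usage: a 10k-example benchmark on which the model is
# right ~72% of the time.
rng = random.Random(42)
outcomes = [1 if rng.random() < 0.72 else 0 for _ in range(10_000)]
acc, (lo, hi) = estimate_accuracy(outcomes, k=100)
print(f"estimate from 100 examples: {acc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```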

The ICL Consistency Test

no code implementations • 8 Dec 2023 • Lucas Weber, Elia Bruni, Dieuwke Hupkes

Just like the previous generation of task-tuned models, large language models (LLMs) that are adapted to tasks via prompt-based methods like in-context learning (ICL) perform well in some setups but not in others.

In-Context Learning • Natural Language Inference
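
One simple way to make "performs well in some setups but not in others" measurable is to compare a model's predictions on the same examples across prompt setups. The sketch below scores agreement with mean pairwise Cohen's kappa; the setups, labels, and predictions are hypothetical, and this only illustrates the kind of consistency such a test probes, not the paper's exact metric.

```python
# Hedged sketch: quantify how consistent a model's predictions are
# across different prompt setups, via mean pairwise Cohen's kappa.
# Setups and predictions are invented; the paper defines its own
# concrete setups and aggregation.
from itertools import combinations

def cohens_kappa(a, b):
    """Cohen's kappa between two equal-length label sequences."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    if expected == 1.0:  # degenerate case: both sequences constant and equal
        return 1.0
    return (observed - expected) / (1 - expected)

# Predictions of one model on the same NLI examples, under three
# hypothetical prompt setups.
preds_by_setup = {
    "instruction_A": ["entail", "contradict", "entail", "neutral"],
    "instruction_B": ["entail", "entail", "entail", "neutral"],
    "few_shot":      ["entail", "contradict", "neutral", "neutral"],
}
kappas = [
    cohens_kappa(preds_by_setup[s1], preds_by_setup[s2])
    for s1, s2 in combinations(preds_by_setup, 2)
]
print(f"mean pairwise kappa (consistency): {sum(kappas)/len(kappas):.3f}")
```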

Mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning

no code implementations • 20 Oct 2023 • Lucas Weber, Elia Bruni, Dieuwke Hupkes

Finding the best way of adapting pre-trained language models to a task is a major challenge in current NLP.

In-Context Learning

Curriculum Learning with Adam: The Devil Is in the Wrong Details

no code implementations • 23 Aug 2023 • Lucas Weber, Jaap Jumelet, Paul Michel, Elia Bruni, Dieuwke Hupkes

We present a number of case studies with common hand-crafted and automated curriculum learning (CL) approaches to illustrate this phenomenon, and we find that none of them outperforms plain Adam with well-chosen hyperparameters.
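
The comparison underlying this finding can be sketched in a few lines: train the same model once on curriculum-ordered data and once on shuffled data, both optimised with Adam. The model, toy data, and difficulty score below are hypothetical stand-ins, not the paper's experimental setup.

```python
# Hedged sketch of the comparison: curriculum-ordered training vs.
# plain shuffled training, both with Adam. Everything here is a toy.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regression data; "difficulty" is stood in by target magnitude.
X = torch.randn(1024, 16)
y = X.sum(dim=1, keepdim=True) + 0.1 * torch.randn(1024, 1)
difficulty = y.abs().squeeze()

def train(order, lr=1e-3, epochs=5, batch=64):
    """Train on examples in the given index order; return final MSE."""
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for i in range(0, len(order), batch):
            idx = order[i:i + batch]
            opt.zero_grad()
            loss = loss_fn(model(X[idx]), y[idx])
            loss.backward()
            opt.step()
    with torch.no_grad():
        return loss_fn(model(X), y).item()

curriculum = torch.argsort(difficulty)  # easy-to-hard ordering
shuffled = torch.randperm(len(X))       # plain Adam baseline
print("curriculum:", train(curriculum))
print("shuffled:  ", train(shuffled))
```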

Language Modelling as a Multi-Task Problem

no code implementations • EACL 2021 • Lucas Weber, Jaap Jumelet, Elia Bruni, Dieuwke Hupkes

In this paper, we propose to study language modelling as a multi-task problem, bringing together three strands of research: multi-task learning, linguistics, and interpretability.

Language Modelling • Multi-Task Learning
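
The multi-task framing can be made concrete by grouping evaluation tokens by linguistic phenomenon and tracking loss per group, treating each phenomenon as one "task". The phenomenon tags and probabilities below are invented for illustration; the paper's own analysis is considerably more detailed.

```python
# Hedged sketch of the multi-task view of language modelling: group
# evaluation tokens by a (hypothetical) linguistic phenomenon and
# track per-group cross-entropy, one "task" per phenomenon.
from collections import defaultdict
import math

# Hypothetical per-token records: (phenomenon tag, probability the
# model assigned to the correct next token).
eval_records = [
    ("agreement",   0.61), ("agreement",   0.58),
    ("npi",         0.22), ("npi",         0.30),
    ("determiners", 0.85), ("determiners", 0.80),
]

per_task_loss = defaultdict(list)
for tag, p in eval_records:
    per_task_loss[tag].append(-math.log(p))  # token-level cross-entropy

for tag, losses in per_task_loss.items():
    print(f"{tag:12s} mean loss: {sum(losses)/len(losses):.3f}")
```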
