In-Context Learning
444 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second
We present TabPFN, a trained Transformer that performs supervised classification on small tabular datasets in less than a second, requires no hyperparameter tuning, and is competitive with state-of-the-art classification methods.
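As a rough usage sketch: the released `tabpfn` package exposes a scikit-learn-style classifier, so inference amounts to storing the training set as context and running one forward pass (class name and defaults below are assumptions based on the public release; check the package docs).

```python
# Minimal sketch, assuming the `tabpfn` package provides a
# scikit-learn-compatible TabPFNClassifier (an assumption, not verified here).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier  # assumed import path

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = TabPFNClassifier()          # pretrained Transformer, no tuning
clf.fit(X_train, y_train)         # "fit" just stores the data as context
print(clf.score(X_test, y_test))  # prediction is a single forward pass
```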
Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers
In addition, we find that VALL-E can preserve the speaker's emotion and the acoustic environment of the acoustic prompt during synthesis.
From system models to class models: An in-context learning paradigm
Is it possible to understand the intricacies of a dynamical system not solely from its input/output pattern, but also by observing the behavior of other systems within the same class?
PanGu-$\alpha$: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation
To enhance the generalization ability of PanGu-$\alpha$, we collect 1.1 TB of high-quality Chinese data from a wide range of domains to pretrain the model.
Data Distributional Properties Drive Emergent In-Context Learning in Transformers
In further experiments, we found that naturalistic data distributions were able to elicit in-context learning in transformers, but not in recurrent models.
OpenICL: An Open-Source Framework for In-context Learning
However, implementing ICL is complex, owing to the diverse retrieval and inference methods involved, as well as the varying pre-processing requirements across models, datasets, and tasks.
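To make those moving parts concrete, here is a minimal, framework-agnostic sketch of the pipeline such a library has to standardize: a demonstration retriever, a prompt template, and an inference call. All names below are illustrative placeholders, not OpenICL's actual API.

```python
# Illustrative ICL pipeline (hypothetical names, not OpenICL's API):
# retrieve demonstrations, render a prompt, call a language model.
from typing import Callable

def topk_retrieve(query: str, pool: list[dict], k: int = 4) -> list[dict]:
    """Toy retriever: rank candidate demonstrations by word overlap."""
    overlap = lambda ex: len(set(query.split()) & set(ex["text"].split()))
    return sorted(pool, key=overlap, reverse=True)[:k]

def render_prompt(demos: list[dict], query: str) -> str:
    """Render demonstrations plus the new query into one prompt string."""
    lines = [f"Input: {d['text']}\nLabel: {d['label']}" for d in demos]
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)

def predict(query: str, pool: list[dict], lm: Callable[[str], str]) -> str:
    """Full ICL step: retrieval -> prompt construction -> inference."""
    return lm(render_prompt(topk_retrieve(query, pool), query))
```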
Enhancing In-Context Learning with Answer Feedback for Multi-Span Question Answering
Previous research has found that in-context learning is an effective way to exploit LLMs: a few task-related labeled examples serve as demonstrations in a few-shot prompt for answering new questions.
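The feedback idea can be sketched as a change to the demonstration format: each example carries a candidate answer and an explicit correctness signal rather than just an input/label pair. The rendering below is hypothetical, not the paper's exact prompt.

```python
# Hypothetical feedback-augmented demonstration format (illustrative only).
def feedback_demo(question: str, candidate: str, feedback: str, gold: str) -> str:
    return (f"Question: {question}\n"
            f"Candidate answer: {candidate}\n"
            f"Feedback: {feedback}\n"
            f"Correct answer: {gold}")

demos = [
    feedback_demo("Which river crosses Paris?", "Seine and Loire",
                  "Loire is wrong; only one river crosses Paris.", "Seine"),
]
prompt = "\n\n".join(demos) + "\n\nQuestion: Which river crosses Cairo?\nCandidate answer:"
```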
What needs to go right for an induction head? A mechanistic study of in-context learning circuits and their formation
By clamping subsets of activations throughout training, we then identify three underlying subcircuits that interact to drive IH formation, yielding the phase change.
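Mechanistically, "clamping" means overwriting a chosen subset of activations with fixed values during the forward pass. A minimal PyTorch sketch of that kind of intervention follows; the module and channel indices are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn

def clamp_hook(fixed_value: float, indices: torch.Tensor):
    """Return a forward hook that overwrites selected activation channels."""
    def hook(module: nn.Module, inputs, output: torch.Tensor):
        output = output.clone()              # avoid in-place autograd issues
        output[..., indices] = fixed_value   # clamp the chosen subcircuit
        return output
    return hook

layer = nn.Linear(16, 16)                    # stand-in for a model component
handle = layer.register_forward_hook(clamp_hook(0.0, torch.tensor([2, 5])))
y = layer(torch.randn(1, 16))                # channels 2 and 5 are now 0.0
handle.remove()
```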
What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers
GPT-3 demonstrates the remarkable in-context learning ability of large-scale language models (LMs) trained on data at the scale of hundreds of billions of tokens.
MetaICL: Learning to Learn In Context
We introduce MetaICL (Meta-training for In-Context Learning), a new meta-training framework for few-shot learning where a pretrained language model is tuned to do in-context learning on a large set of training tasks.
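Concretely, the meta-training objective reduces to a data-formatting choice: each training instance concatenates k demonstration input/output pairs from one task with a query input, and the loss is computed only on the query's output. A schematic sketch (separators and field layout are assumptions, not MetaICL's exact format):

```python
# Schematic MetaICL-style training instance: k demonstrations from one
# task plus a query; the model conditions on `context`, predicts `target`.
def build_instance(demos: list[tuple[str, str]], query_x: str, query_y: str):
    context = " ".join(f"{x} {y}" for x, y in demos) + f" {query_x}"
    return context, query_y

ctx, target = build_instance(
    demos=[("great movie .", "positive"), ("dull plot .", "negative")],
    query_x="superb acting .",
    query_y="positive",
)
```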