BEAR-probe (Benchmark for Evaluating Associative Reasoning)

Introduced by Wiland et al. in BEAR: A Unified Framework for Evaluating Relational Knowledge in Causal and Masked Language Models

The $\text{BEAR}$ dataset and its larger variant, $\text{BEAR}_{\text{big}}$, are benchmarks for evaluating the factual (relational) knowledge contained in language models. Each relation instance is cast as a set of natural-language statements, one per answer option, and a model is scored on whether it assigns the highest (pseudo-)log-likelihood to the statement containing the correct answer. Because this requires only sentence-level scores, the benchmark applies equally to causal and masked language models.

For more information, visit the LM Pub Quiz website.
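
To make the ranking concrete, below is a minimal sketch that scores a few answer statements with a causal language model via Hugging Face Transformers and picks the most likely one. The model name, the relation instance, and the scoring details are illustrative assumptions, not the BEAR or LM Pub Quiz implementation; use the library itself for actual evaluations.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative sketch only -- not the BEAR/LM Pub Quiz implementation.
model_name = "gpt2"  # any causal LM; a masked LM would use pseudo-log-likelihood
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def log_likelihood(sentence: str) -> float:
    """Total log-probability of the sentence under the causal LM."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    # outputs.loss is the mean negative log-likelihood over the shifted targets
    num_targets = inputs["input_ids"].shape[1] - 1
    return -outputs.loss.item() * num_targets

# Hypothetical relation instance ("capital of") with three answer options.
options = ["Paris", "Berlin", "Rome"]
statements = [f"The capital of France is {city}." for city in options]
scores = [log_likelihood(s) for s in statements]
print(options[scores.index(max(scores))])  # ideally: Paris

Ranking whole statements, rather than filling a single [MASK] token, allows answer options that span multiple tokens and puts causal and masked models on an equal footing.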

Citation

When using the dataset or library, please cite the following paper:

@misc{wilandBEARUnifiedFramework2024,
  title = {{{BEAR}}: {{A Unified Framework}} for {{Evaluating Relational Knowledge}} in {{Causal}} and {{Masked Language Models}}},
  shorttitle = {{{BEAR}}},
  author = {Wiland, Jacek and Ploner, Max and Akbik, Alan},
  year = {2024},
  number = {arXiv:2404.04113},
  eprint = {2404.04113},
  publisher = {arXiv},
  url = {http://arxiv.org/abs/2404.04113},
}


License


  • CC BY-SA

Modalities


  • Texts

Languages


  • English