LLaMA, a collection of foundation language models ranging from 7B to 65B parameters, is introduced; the authors show that state-of-the-art models can be trained using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets.
Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, is presented, which the authors aim to share fully and responsibly with interested researchers.
Galactica, a large language model that can store, combine, and reason about scientific knowledge, is introduced; it sets a new state of the art on downstream tasks such as PubMedQA (77.6%) and the MedMCQA dev set (52.9%).