3260 papers • 126 benchmarks • 313 datasets
These leaderboards are used to track progress in dark-humor-detection-11.
Use these libraries to find dark-humor-detection-11 models and implementations.
No subtasks available.
This paper presents an analysis of Transformer-based language model performance across a wide range of model scales -- from models with tens of millions of parameters up to a 280-billion-parameter model called Gopher.
This work trains Chinchilla, a predicted compute-optimal model that uses the same compute budget as Gopher but with 70B parameters and 4$\times$ more data, and reaches a state-of-the-art average accuracy, more than a 7% improvement over Gopher.
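The compute-optimal claim can be made concrete with the standard $C \approx 6ND$ approximation (training compute ≈ 6 × parameters × training tokens). The sketch below is a rough illustration rather than the paper's exact accounting; the token counts are assumptions based on commonly cited figures (~300B tokens for Gopher, ~1.4T for Chinchilla). It shows how a 4$\times$ smaller model trained on roughly 4$\times$ more tokens lands in a comparable FLOP budget.

```python
# Minimal sketch of the C ~ 6*N*D training-compute approximation.
# Token counts below are assumptions for illustration, not values
# taken from this page.

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * params * tokens

gopher = training_flops(params=280e9, tokens=300e9)      # 280B params, ~300B tokens
chinchilla = training_flops(params=70e9, tokens=1.4e12)  # 70B params, ~1.4T tokens

print(f"Gopher:     {gopher:.2e} FLOPs")      # ~5.0e23
print(f"Chinchilla: {chinchilla:.2e} FLOPs")  # ~5.9e23, a comparable budget
```

Under this approximation, halving the parameter count while doubling the token count leaves the compute budget unchanged, which is why the smaller, longer-trained Chinchilla can match Gopher's budget.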