Involves pretraining language models to support multi-document NLP tasks. Source: Cross-Document Language Modeling
These leaderboards are used to track progress in Cross-Document Language Modeling.
Use these libraries to find Cross-Document Language Modeling models and implementations.
No subtasks available.
A new pretraining approach geared for multi-document language modeling, incorporating two key ideas into the masked language modeling self-supervised objective. It improves over recent long-range transformers by introducing dynamic global attention that has access to the entire input when predicting masked tokens.
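The dynamic global attention idea can be sketched as follows: positions holding masked tokens are given global attention over the entire concatenated multi-document input, while all other positions keep local attention. Below is a minimal illustrative sketch, assuming a Longformer-style mask convention (1 = global attention, 0 = local attention); the `MASK_ID` value and the function name are hypothetical, not from the source.

```python
# Hedged sketch: assign dynamic global attention to masked positions,
# following a Longformer-style convention (0 = local, 1 = global).
# MASK_ID is an illustrative assumption, not a real tokenizer's id.

MASK_ID = 103  # hypothetical [MASK] token id

def dynamic_global_attention_mask(input_ids, mask_id=MASK_ID):
    """Return a per-token mask: 1 (global) for masked tokens, 0 (local)
    otherwise, so each masked token can attend to the whole concatenated
    multi-document input when being predicted."""
    return [1 if tok == mask_id else 0 for tok in input_ids]

# Example: two concatenated "documents" with two masked tokens.
ids = [5, 9, MASK_ID, 7, 2, MASK_ID, 4]
print(dynamic_global_attention_mask(ids))  # [0, 0, 1, 0, 0, 1, 0]
```

Because the mask depends on which tokens are masked in each training example, the set of globally attending positions changes per input, which is what makes the global attention "dynamic".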