These leaderboards are used to track progress in Negation Scope Resolution.
Use these libraries to find Negation Scope Resolution models and implementations.
The design choices involved in using BERT, a popular transfer learning model, for this task are explored, and state-of-the-art results for scope resolution are reported across all three datasets.
Three popular transformer-based architectures, BERT, XLNet, and RoBERTa, are applied to negation detection and scope resolution on two publicly available datasets, the BioScope Corpus and the SFU Review Corpus, with substantial improvements reported over previous results.
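To illustrate how these transformer-based approaches are commonly framed, here is a minimal sketch that casts scope resolution as per-token binary classification with a pretrained BERT checkpoint via Hugging Face Transformers. The model name, the two-label scheme (in-scope vs. out-of-scope), and the example sentence are assumptions for illustration; the papers above differ in their exact labeling schemes and training details.

```python
# Minimal sketch: negation scope resolution as token classification.
# Assumptions: any BERT-style checkpoint, and a two-class label scheme
# (0 = outside negation scope, 1 = inside negation scope).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "bert-base-uncased"  # illustrative choice of checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_NAME, num_labels=2  # classification head is randomly initialized
)

sentence = "The patient does not have a history of diabetes."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, 2)

# Per-token scope predictions. These are arbitrary until the head is
# fine-tuned on scope-annotated data such as BioScope or SFU Review.
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, label in zip(tokens, predictions.tolist()):
    print(f"{token}\t{'IN_SCOPE' if label == 1 else 'OUT'}")
```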
A new set of annotated court decisions in German, French, and Italian is released and used to improve negation scope resolution in both zero-shot and multilingual settings, demonstrating that models pre-trained without legal data underperform on this task.
A multitask learning approach is shown to outperform single-task learning, with new state-of-the-art results reported for negation and speculation scope resolution on the BioScope Corpus and the SFU Review Corpus.
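To make the multitask idea concrete, the sketch below pairs a shared transformer encoder with two token-level heads, one for negation scope and one for speculation scope, trained with a joint loss. The class and function names, the equal loss weighting, and the two-label scheme are illustrative assumptions, not the exact architecture of the paper summarized above.

```python
# Minimal sketch of multitask scope resolution: one shared encoder,
# two token-level classification heads. Head sizes, label scheme, and
# loss weighting are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel

class MultitaskScopeModel(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased", num_labels: int = 2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.negation_head = nn.Linear(hidden, num_labels)
        self.speculation_head = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden_states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state  # (batch, seq_len, hidden)
        return {
            "negation": self.negation_head(hidden_states),
            "speculation": self.speculation_head(hidden_states),
        }

def multitask_loss(outputs, negation_labels, speculation_labels):
    # Per-task cross-entropy over tokens, summed with equal weights
    # (an assumption; the weighting is a tunable design choice).
    ce = nn.CrossEntropyLoss()
    loss_neg = ce(outputs["negation"].flatten(0, 1), negation_labels.flatten())
    loss_spec = ce(outputs["speculation"].flatten(0, 1), speculation_labels.flatten())
    return loss_neg + loss_spec
```

The design intuition is that negation and speculation scopes share most of their linguistic cues, so a shared encoder lets each task act as a regularizer for the other while only the lightweight heads remain task-specific.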