Contextualised Word Representations
This paper presents the first unsupervised approach to lexical semantic change that makes use of contextualised word representations: it exploits the BERT neural language model to obtain representations of word usages, clusters these representations into usage types, and measures change over time with three proposed metrics.
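The usage-representation-plus-clustering pipeline in this summary is easy to prototype. Below is a minimal sketch, assuming bert-base-uncased from the Hugging Face transformers library and k-means from scikit-learn as the clusterer; the pooling strategy, layer choice, and cluster count are illustrative assumptions rather than the paper's exact configuration, and the three change metrics are omitted.

```python
# Sketch: extract contextualised usage representations of a target word
# with BERT and cluster them into usage types.
# Assumptions: bert-base-uncased, last hidden layer, mean pooling over the
# target's subtokens, and k-means standing in for the paper's clusterer.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.cluster import KMeans

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def usage_vector(sentence: str, target: str) -> torch.Tensor:
    """Mean of the hidden states of the target word's subtoken span."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, 768)
    target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    # Locate the first occurrence of the target's subtoken sequence.
    for i in range(len(ids) - len(target_ids) + 1):
        if ids[i : i + len(target_ids)] == target_ids:
            return hidden[i : i + len(target_ids)].mean(dim=0)
    raise ValueError(f"{target!r} not found in sentence")

sentences = [
    "The bank approved the loan.",
    "She deposited cash at the bank.",
    "They walked along the river bank.",
    "The bank of the stream was muddy.",
]
X = torch.stack([usage_vector(s, "bank") for s in sentences]).numpy()
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g. [0 0 1 1]: financial vs. riverside usage types
```

Comparing the distribution of such cluster labels across time periods is what the proposed change metrics operate on.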
To explore whether syntactic probes would do better by drawing on existing techniques, this work compares the structural probe to a more traditional parser with an identical lightweight parameterisation.
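For context, here is a minimal sketch of what the structural probe computes, assuming the Hewitt-and-Manning formulation: a linear map trained so that squared distances between projected contextual vectors approximate syntactic tree distances. The gold distances below are random placeholders, and the parser baseline from the comparison is not shown.

```python
# Sketch: the structural probe's core objective, assuming a linear map B
# whose projected squared distances approximate gold tree distances.
import torch
import torch.nn as nn

class StructuralProbe(nn.Module):
    def __init__(self, hidden_dim: int, probe_rank: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, probe_rank, bias=False)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        """h: (seq_len, hidden_dim) -> pairwise squared distances."""
        z = self.proj(h)                      # (seq_len, probe_rank)
        diff = z.unsqueeze(0) - z.unsqueeze(1)
        return (diff ** 2).sum(dim=-1)        # (seq_len, seq_len)

probe = StructuralProbe(hidden_dim=768, probe_rank=64)
h = torch.randn(10, 768)                      # contextual embeddings of one sentence
gold = torch.randint(1, 5, (10, 10)).float()  # placeholder gold tree distances
loss = (probe(h) - gold).abs().mean()         # L1 loss averaged over word pairs
loss.backward()
```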
This paper proposes a fully unsupervised approach to improving word-in-context (WiC) representations in pretrained language models (PLMs), achieved via a simple and efficient WiC-targeted fine-tuning procedure called MirrorWiC.
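A hedged sketch of a contrastive fine-tuning step in the spirit of MirrorWiC, assuming that two dropout-perturbed encodings of the same sentence act as a positive pair under an InfoNCE loss with in-batch negatives; the augmentation, pooling, and temperature below are illustrative, not the paper's exact recipe.

```python
# Sketch: one contrastive WiC-style fine-tuning step. Two stochastic
# "mirrored" views of each sentence (dropout kept active) form positives;
# other sentences in the batch serve as negatives.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.train()  # keep dropout active so two forward passes differ

sentences = ["She sat on the bank of the river.",
             "He opened an account at the bank."]
enc = tokenizer(sentences, return_tensors="pt", padding=True)

def encode(batch):
    # Mean-pooled token states stand in for target-word pooling here.
    out = model(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return (out * mask).sum(1) / mask.sum(1)

z1, z2 = encode(enc), encode(enc)   # two dropout-perturbed views
sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / 0.05
loss = F.cross_entropy(sim, torch.arange(len(sentences)))
loss.backward()  # one fine-tuning step; an optimizer update would follow
```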
Soft-prompt tuning is found to be an efficient alternative to standard model fine-tuning, and PLMs show better discrimination but worse calibration than simpler static word-embedding models as the classification problem becomes more imbalanced.
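A minimal sketch of soft-prompt tuning under common assumptions: the PLM is frozen, and only a small matrix of prompt embeddings prepended to the input (plus, here, an illustrative classification head) receives gradients.

```python
# Sketch: soft-prompt tuning with a frozen PLM. Only `soft_prompt` and
# the illustrative `classifier` head are trainable.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
for p in model.parameters():
    p.requires_grad = False  # the PLM stays frozen

n_prompt, dim = 20, model.config.hidden_size
soft_prompt = nn.Parameter(torch.randn(n_prompt, dim) * 0.02)
classifier = nn.Linear(dim, 2)  # illustrative binary classification head

enc = tokenizer("The loan was approved.", return_tensors="pt")
tok_emb = model.get_input_embeddings()(enc["input_ids"])       # (1, L, dim)
inputs = torch.cat([soft_prompt.unsqueeze(0), tok_emb], dim=1)  # prepend prompts
mask = torch.ones(inputs.shape[:2], dtype=torch.long)

out = model(inputs_embeds=inputs, attention_mask=mask).last_hidden_state
logits = classifier(out[:, 0])  # pool at the first prompt position
# Backpropagating a loss on `logits` updates only soft_prompt and classifier.
```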