3260 papers • 126 benchmarks • 313 datasets
The task of Word Sense Disambiguation (WSD) consists of associating words in context with their most suitable entry in a pre-defined sense inventory. The de facto sense inventory for English in WSD is WordNet. For example, given the word “mouse” and the sentence “A mouse consists of an object held in one's hand, with one or more buttons,” we would assign “mouse” its electronic-device sense (the 4th sense in the WordNet sense inventory).
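For readers who want to inspect the sense inventory, here is a minimal sketch using NLTK's WordNet interface, with the classic Lesk algorithm as an illustrative (far-from-state-of-the-art) baseline. It assumes NLTK is installed and the WordNet data has been downloaded.

```python
# Requires: pip install nltk, then nltk.download('wordnet')
from nltk.corpus import wordnet as wn
from nltk.wsd import lesk

# Enumerate the candidate WordNet senses for "mouse"
for i, synset in enumerate(wn.synsets('mouse'), start=1):
    print(i, synset.name(), '-', synset.definition())

# A simple knowledge-based baseline: Lesk picks the sense whose gloss
# overlaps most with the context words
context = "A mouse consists of an object held in one's hand, with one or more buttons".split()
print(lesk(context, 'mouse'))  # ideally the electronic-device sense, e.g. Synset('mouse.n.04')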
These leaderboards are used to track progress in Word Sense Disambiguation.
Use these libraries to find Word Sense Disambiguation models and implementations.
GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic.
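As an illustration of that in-context setup applied to WSD, the sketch below builds a few-shot prompt. The prompt format and demonstration examples are invented for illustration and are not GPT-3's actual evaluation protocol.

```python
# Hypothetical few-shot prompt for in-context WSD (illustrative format only)
demos = [
    ("The mouse scurried under the floorboards.", "mouse", "animal"),
    ("Click the left mouse button twice.", "mouse", "electronic device"),
]
query = ("A mouse consists of an object held in one's hand, "
         "with one or more buttons.", "mouse")

prompt = "Disambiguate the target word in each sentence.\n\n"
for sent, word, sense in demos:
    prompt += f"Sentence: {sent}\nWord: {word}\nSense: {sense}\n\n"
prompt += f"Sentence: {query[0]}\nWord: {query[1]}\nSense:"
print(prompt)  # send to the language model; the completion is the predicted sense
```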
This paper introduces and shares FlauBERT, a model trained on a very large and heterogeneous French corpus, applies it to diverse NLP tasks, and shows that it outperforms other pre-training approaches most of the time.
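A minimal sketch of using FlauBERT as a feature extractor via Hugging Face transformers, assuming the 'flaubert/flaubert_base_cased' checkpoint:

```python
import torch
from transformers import FlaubertModel, FlaubertTokenizer

tokenizer = FlaubertTokenizer.from_pretrained('flaubert/flaubert_base_cased')
model = FlaubertModel.from_pretrained('flaubert/flaubert_base_cased')

# Encode a French sentence and extract contextual embeddings
inputs = tokenizer("Le chat dort sur le canapé.", return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```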
A new model architecture, DeBERTa (Decoding-enhanced BERT with disentangled attention), is proposed that improves on the BERT and RoBERTa models using two novel techniques that significantly improve the efficiency of model pre-training and the performance of downstream tasks.
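The core of disentangled attention can be sketched compactly: attention scores are the sum of content-to-content, content-to-position, and position-to-content terms over relative positions. The toy below (plain NumPy, single head, projection matrices folded into the inputs) is a simplified illustration, not the model's actual implementation.

```python
import numpy as np

def disentangled_scores(Hq, Hk, R):
    """Toy single-head scores: content-to-content + content-to-position
    + position-to-content, over clipped relative distances.
    Hq, Hk: (n, d) content vectors; R: (2k-1, d) relative-position embeddings."""
    n, d = Hq.shape
    k = (R.shape[0] + 1) // 2
    i, j = np.arange(n)[:, None], np.arange(n)[None, :]
    delta = np.clip(i - j, -k + 1, k - 1) + k - 1      # relative-distance index
    c2c = Hq @ Hk.T                                    # query content . key content
    c2p = (Hq @ R.T)[np.arange(n)[:, None], delta]     # query content . rel-pos key
    p2c = (Hk @ R.T)[np.arange(n)[None, :], delta.T]   # key content . rel-pos query
    return (c2c + c2p + p2c) / np.sqrt(3 * d)          # scale over the three terms

rng = np.random.default_rng(0)
n, d, k = 5, 8, 3
scores = disentangled_scores(rng.normal(size=(n, d)), rng.normal(size=(n, d)),
                             rng.normal(size=(2 * k - 1, d)))
print(scores.shape)  # (5, 5)
```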
A novel way of using pre-trained word representations for the Tsetlin Machine (TM) is proposed, which enhances the performance and interpretability of the TM by extracting semantically related words from pre-trained word representations as input features to the TM.
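The feature-extraction idea can be sketched with off-the-shelf embeddings; the snippet below uses gensim's pre-trained GloVe vectors to expand a bag of words with nearest neighbors. This is one plausible reading of the approach, not the paper's exact recipe.

```python
# Expand a bag of words with semantically related neighbors from
# pre-trained embeddings (sketch; the TM itself is not shown here)
import gensim.downloader as api

vectors = api.load('glove-wiki-gigaword-50')  # small pre-trained GloVe model

def expand_with_neighbors(tokens, topn=3):
    expanded = list(tokens)
    for tok in tokens:
        if tok in vectors:
            expanded += [w for w, _ in vectors.most_similar(tok, topn=topn)]
    return expanded  # enlarged bag of words, usable as boolean input features

print(expand_with_neighbors(['mouse', 'button']))
```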
A transition-based parser for AMR that parses sentences left-to-right in linear time is described, and it is shown that this parser is competitive with the state of the art on the LDC2015E86 dataset and outperforms state-of-the-art parsers at recovering named entities and handling polarity.
This paper constructs context-gloss pairs, proposes three BERT-based models for WSD, and fine-tunes the pre-trained BERT model to achieve new state-of-the-art results on the WSD task.
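A sketch of the context-gloss pairing: pair the context with each candidate WordNet gloss and score the pairs as sentence-pair classification. The 'bert-base-uncased' checkpoint below stands in for a fine-tuned model, so its scores are meaningless until trained.

```python
import torch
from nltk.corpus import wordnet as wn
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)

# One (context, gloss) pair per candidate sense of the target word
context = "A mouse consists of an object held in one's hand, with one or more buttons."
pairs = [(context, f"mouse: {s.definition()}") for s in wn.synsets('mouse')]

enc = tokenizer([c for c, _ in pairs], [g for _, g in pairs],
                padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    scores = model(**enc).logits[:, 1]               # "correct sense" score per pair
print(wn.synsets('mouse')[scores.argmax().item()])   # meaningful only after fine-tuning
```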
A knowledge-based method for Word Sense Disambiguation in biomedical and clinical text is reported that uses no relational information yet obtains performance comparable to previous approaches on the MSH-WSD dataset.
This article proposes two methods that greatly reduce the size of neural WSD models, improving their coverage without additional training data and without impacting their precision.
This work proposes (supervised) word-class embeddings (WCEs), and shows that, when concatenated to (unsupervised) pre-trained word embeddings, they substantially facilitate the training of deep-learning models in multiclass classification by topic.
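A minimal sketch of the WCE idea, on toy data: represent each word by its distribution over class labels and concatenate that to its unsupervised vector. The corpus, labels, and normalization below are illustrative assumptions.

```python
import numpy as np

# Toy labeled corpus: 0 = travel, 1 = sports
docs = [["cheap", "flights", "deals"], ["match", "score", "goal"],
        ["flights", "delayed"], ["goal", "scored", "late"]]
labels = [0, 1, 0, 1]
vocab = sorted({w for d in docs for w in d})
n_classes = 2

# Count word occurrences per class, then normalize per word
counts = np.zeros((len(vocab), n_classes))
for doc, y in zip(docs, labels):
    for w in doc:
        counts[vocab.index(w), y] += 1
wce = counts / counts.sum(axis=1, keepdims=True)  # per-word class distribution

# Final representation per word w (unsup vectors loaded separately):
#   np.concatenate([unsup[w], wce[vocab.index(w)]])
print(dict(zip(vocab, wce.round(2))))
```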
This paper proposes a meta-learning framework for few-shot word sense disambiguation (WSD), where the goal is to learn to disambiguate unseen words from only a few labeled instances.
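A sketch of how such episodes might be constructed, with a hypothetical data layout: each ambiguous word is treated as a task, with K support examples to adapt on and held-out queries to evaluate.

```python
import random

# Sense-annotated examples grouped by target word: word -> [(sentence, sense), ...]
data = {
    "bank": [("rowed to the bank", "river_bank"), ("opened a bank account", "finance"),
             ("bank of the stream", "river_bank"), ("the bank raised rates", "finance")],
    "mouse": [("the mouse squeaked", "animal"), ("clicked the mouse", "device"),
              ("a mouse in the barn", "animal"), ("wireless mouse", "device")],
}

def sample_episode(k=1):
    """Sample one few-shot task: k support and k query examples for one word."""
    word = random.choice(list(data))
    examples = random.sample(data[word], 2 * k)
    support, query = examples[:k], examples[k:]  # adapt on support, evaluate on query
    return word, support, query

print(sample_episode())
```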