Given a corpus and a target term (hyponym), the task of hypernym discovery consists of extracting a set of its most appropriate hypernyms from the corpus. For example, for the input word “dog”, some valid hypernyms would be “canine”, “mammal” or “animal”.
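To make the task concrete, below is a minimal sketch of the classic unsupervised approach: scanning a corpus for Hearst-style lexico-syntactic patterns such as "X such as Y". The pattern list, the naive plural stripping, and the toy corpus are all illustrative simplifications, not a full system.

```python
import re
from collections import defaultdict

# Hearst-style patterns that signal (hypernym, hyponym) pairs.
# This list is illustrative, not exhaustive.
PATTERNS = [
    re.compile(r"(\w+)s? such as (\w+)"),     # "animals such as dogs"
    re.compile(r"(\w+)s?, including (\w+)"),  # "mammals, including dogs"
    re.compile(r"(\w+) and other (\w+)"),     # "dogs and other canines" (hyponym first)
]

def discover_hypernyms(corpus, target):
    """Collect and rank candidate hypernyms of `target` from raw text."""
    counts = defaultdict(int)
    for sentence in corpus:
        s = sentence.lower()
        for i, pat in enumerate(PATTERNS):
            for m in pat.finditer(s):
                # In the first two patterns the hypernym comes first;
                # in "X and other Y" it comes second.
                hyper, hypo = (m.group(1), m.group(2)) if i < 2 else (m.group(2), m.group(1))
                # Naive singularization; a real system would lemmatize.
                if hypo.rstrip("s") == target:
                    counts[hyper.rstrip("s")] += 1
    # Rank candidates by how often the pattern pairs them with the target.
    return sorted(counts, key=counts.get, reverse=True)

corpus = [
    "Animals such as dogs live alongside humans.",
    "Dogs and other canines hunt in packs.",
    "Mammals, including dogs, nurse their young.",
]
print(discover_hypernyms(corpus, "dog"))  # ['animal', 'canine', 'mammal'] (tie order may vary)
```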
These leaderboards are used to track progress in Hypernym Discovery.
Use these libraries to find Hypernym Discovery models and implementations.
This work presents a novel method to embed directed acyclic graphs, modeling hierarchical relations as partial orders defined by a family of nested geodesically convex cones, and proves that these entailment cones admit an optimal shape with a closed-form expression in both Euclidean and hyperbolic space.
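As a rough illustration of the cone construction, the sketch below checks cone membership in the Euclidean case, assuming the Euclidean closed-form half-aperture ψ(x) = arcsin(K/‖x‖) reported in the entailment-cones literature; the constant K and the toy embeddings are invented for the example.

```python
import numpy as np

K = 0.1  # aperture constant; illustrative value

def half_aperture(x):
    """Euclidean closed-form half-aperture psi(x) = arcsin(K / ||x||).

    Only defined for ||x|| >= K, i.e. outside a small ball around the
    origin (general concepts sit near the origin, specific ones farther out).
    """
    return np.arcsin(K / np.linalg.norm(x))

def angle_at_apex(parent, child):
    """Angle at `parent` between the cone axis (the direction of `parent`
    from the origin) and the vector pointing toward `child`."""
    v = child - parent
    cos = np.dot(parent, v) / (np.linalg.norm(parent) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def entailment_energy(parent, child):
    """0 when `child` lies inside the cone of `parent` (relation holds);
    positive otherwise, measuring the angular violation."""
    return max(0.0, angle_at_apex(parent, child) - half_aperture(parent))

# Illustrative embeddings: "animal" is general (near the origin), "dog"
# lies roughly along the same direction but farther out, so it should
# fall inside animal's cone.
animal = np.array([0.3, 0.0])
dog = np.array([0.6, 0.02])
print(entailment_energy(animal, dog))  # ~0.0 -> "dog is-a animal"
```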
Comparison to state-of-the-art supervised methods shows that while supervised methods generally outperform unsupervised ones, they are sensitive to the distribution of training instances, which hurts their reliability.
This system combines supervised projection learning with unsupervised pattern-based hypernym discovery, and was ranked first on the three subtasks for which it submitted results.
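A minimal sketch of the projection-learning idea (not the system's actual code): fit a linear map W from hyponym embeddings to hypernym embeddings by least squares on training pairs, then rank vocabulary words by cosine similarity to the projected query. The embeddings and training pairs here are random toys; in practice these would be pretrained word vectors and gold hyponym-hypernym pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embedding table; real systems use pretrained word vectors.
dim = 50
vocab = ["dog", "cat", "canine", "feline", "mammal", "animal"]
emb = {w: rng.normal(size=dim) for w in vocab}

# Training pairs (hyponym, hypernym); illustrative only.
pairs = [("dog", "canine"), ("cat", "feline"), ("dog", "mammal"), ("cat", "mammal")]

# Learn a linear projection W minimizing ||X W - Y||^2 via least squares,
# so that W maps a hyponym vector near its hypernym's vector.
X = np.stack([emb[h] for h, _ in pairs])
Y = np.stack([emb[g] for _, g in pairs])
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def predict_hypernyms(word, k=3):
    """Project the query through W and rank the vocabulary by cosine similarity."""
    q = emb[word] @ W
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    scored = [(w, cos(q, emb[w])) for w in vocab if w != word]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

print(predict_hypernyms("dog"))
```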
A manually improved dataset for lexical-semantic relation prediction is provided, and its impact is evaluated across three pre-trained neural language models, revealing strong performance divergences between languages and confusion between specific relations.
This work studies the possibility of fine-tuning language models to explicitly model concepts and their properties, and shows that the resulting encoders allow us to predict commonsense properties with much higher accuracy than is possible by directly fine-tuning language models.
TaxoLLaMA, an all-in-one model kept lightweight through 4-bit quantization and LoRA, demonstrates very strong zero-shot performance on Lexical Entailment and Taxonomy Construction with no fine-tuning, and reveals hidden multilingual and domain-adaptation capabilities with light tuning or few-shot learning.
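For context, the sketch below shows a generic 4-bit-quantization-plus-LoRA setup using Hugging Face transformers, peft, and bitsandbytes. The base checkpoint, LoRA hyperparameters, and prompt format are placeholders, not the authors' released configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

BASE = "meta-llama/Llama-2-7b-hf"  # placeholder base checkpoint

# Load the base model with 4-bit (NF4) quantization via bitsandbytes.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(BASE, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(BASE)

# Attach lightweight LoRA adapters; only these small matrices are trained,
# which is what keeps the tuned model cheap to store and serve.
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # a tiny fraction of the full model

# Prompt-style usage: ask the tuned model for hypernyms of a term.
prompt = "hyponym: dog | hypernyms:"  # illustrative prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```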