Few-Shot Relation Classification is a relation classification task under minimal annotated data, where a model must classify a new incoming query instance given only a few support instances per relation (e.g., 1 or 5) at test time. Source: MICK: A Meta-Learning Framework for Few-shot Relation Classification with Little Training Data
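The episode setup above (an N-way K-shot support set plus a query) can be sketched with a simple prototype-based classifier. This is a minimal illustration with toy random "embeddings", not any specific published model; the function name and data are hypothetical.

```python
import numpy as np

def classify_query(support, query):
    """Classify a query embedding in an N-way K-shot episode.

    support: dict mapping relation label -> (K, d) array of support
             instance embeddings; query: (d,) query embedding.
    Returns the label whose class prototype (mean of its support
    embeddings) is nearest to the query in Euclidean distance.
    """
    prototypes = {label: vecs.mean(axis=0) for label, vecs in support.items()}
    return min(prototypes, key=lambda label: np.linalg.norm(query - prototypes[label]))

# Toy 5-way 1-shot episode with random 4-d "embeddings".
rng = np.random.default_rng(0)
support = {f"rel_{i}": rng.normal(size=(1, 4)) for i in range(5)}
# Query lies very close to rel_2's single support instance.
query = support["rel_2"][0] + 0.01 * rng.normal(size=4)
print(classify_query(support, query))  # rel_2
```

In a real system the embeddings would come from a trained sentence encoder; the point here is only the episode structure: classification is relative to the current support set, not to a fixed label inventory.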
Empirical results show that even the most competitive few-shot learning models struggle on this task, especially as compared with humans, and indicate that few-shot relation classification remains an open problem and still requires further research.
This paper builds on extensions of Harris’ distributional hypothesis to relations, as well as recent advances in learning text representations (specifically, BERT), to build task agnostic relation representations solely from entity-linked text.
It is found that the state-of-the-art few-shot relation classification models struggle on these two aspects, and that the commonly-used techniques for domain adaptation and NOTA detection still cannot handle the two challenges well.
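NOTA ("none of the above") detection means the query may express none of the relations in the support set. A common simple baseline, sketched here with hypothetical names and toy 2-d vectors (not the method of any particular paper), is to threshold the distance to the nearest class prototype:

```python
import numpy as np

def classify_with_nota(prototypes, query, threshold):
    """Return the nearest relation label, or "NOTA" when the query is
    farther than `threshold` from every class prototype.

    A plain distance-threshold baseline: the threshold would normally
    be tuned on held-out episodes containing NOTA queries.
    """
    dists = {label: np.linalg.norm(query - proto) for label, proto in prototypes.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] <= threshold else "NOTA"

# Toy 2-d prototypes for two relations.
prototypes = {"founder_of": np.array([1.0, 0.0]), "born_in": np.array([0.0, 1.0])}
print(classify_with_nota(prototypes, np.array([0.9, 0.1]), threshold=0.5))   # founder_of
print(classify_with_nota(prototypes, np.array([-2.0, -2.0]), threshold=0.5)) # NOTA
```

The finding above suggests such thresholding is insufficient in practice, which motivates the dedicated NOTA-handling techniques discussed in the cited work.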
A multi-level matching and aggregation network (MLMAN) for few-shot relation classification encodes the query instance and each support set interactively by considering their matching information at both the local and instance levels.
This work adapts the state-of-the-art sentence-level method MNAV to the document level and develops it further for improved domain adaptation, finding few-shot document-level relation extraction (FSDLRE) to be a challenging setting with interesting new characteristics, such as the ability to sample NOTA instances from the support set.
This work proposes a novel meta-information guided meta-learning (MIML) framework, where semantic concepts of classes provide strong guidance for meta-learning in both initialization and adaptation, which enables more effective initialization and faster adaptation.
The information richness embedded in business entities allows models to focus on contextual nuances, reducing their reliance on superficial clues such as relation-specific verbs, which highlights the importance of high-quality data for robust domain adaptation.
A novel approach to enhance information extraction combining multiple sentence representations and contrastive learning is introduced, validating the adaptability of the approach, maintaining robust performance in scenarios that include relation descriptions, and showcasing its flexibility to adapt to different resource constraints.