Given an input sentence, the task is to extract triplets consisting of a head entity, a relation label, and a tail entity, where the relation label was not seen during training.
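As a minimal illustration of the task's input and output (the sentence and relation here are hypothetical examples, and the relation "composed by" is assumed to be unseen at training time):

```python
# Illustrative example of zero-shot relation triplet extraction.
sentence = "The Moonlight Sonata was composed by Ludwig van Beethoven."

# Expected output: a (head entity, relation label, tail entity) triplet,
# where "composed by" never appeared in the training relations.
triplet = ("Moonlight Sonata", "composed by", "Ludwig van Beethoven")

head, relation, tail = triplet
print(head, "--", relation, "->", tail)
```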
These leaderboards are used to track progress in Zero-Shot Relation Triplet Extraction.
Use these libraries to find Zero-Shot Relation Triplet Extraction models and implementations.
This work unifies language model prompts and structured text approaches, designing a structured prompt template for generating synthetic relation samples conditioned on relation label prompts (RelationPrompt), together with a novel Triplet Search Decoding method.
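A rough sketch of the idea of conditioning a generator on a relation label to synthesize training samples; the prompt wording below is an illustrative assumption, not RelationPrompt's exact template:

```python
# Sketch: build a structured prompt that conditions a generative LM on a
# relation label, so it can produce synthetic sentences for that relation.
def relation_prompt(relation):
    # Prompt format is an assumption for illustration only.
    return f"Relation : {relation} . Context :"

# A generative language model would continue this prompt with a sentence
# and head/tail entity markers, yielding synthetic training data for a
# relation that has no labeled examples.
print(relation_prompt("religion"))
```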
It is argued that it can be beneficial to design two distinct encoders to capture these two different types of information, and a novel table-sequence encoder is proposed, in which two different encoders (a table encoder and a sequence encoder) help each other in the representation learning process.
This work proposes a novel framework, ZETT (ZEro-shot Triplet extraction by Template infilling), that aligns the task objective to the pre-training objective of generative transformers to generalize to unseen relations.
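As a rough illustration of the template-infilling idea (the template wording and relation set below are assumptions, not ZETT's actual prompts):

```python
# Minimal sketch of template infilling for zero-shot relation extraction:
# each candidate relation is written as a fill-in-the-blank template, and
# a generative model would score how well the entity pair fills the blanks.
templates = {
    "composed by": "{tail} composed {head}.",
    "located in": "{head} is located in {tail}.",
}

def fill(relation, head, tail):
    """Render the filled template for one candidate relation."""
    return templates[relation].format(head=head, tail=tail)

print(fill("composed by", "Moonlight Sonata", "Beethoven"))
# A real system would rank the filled templates with a pre-trained
# generative transformer and keep the highest-scoring triplet.
```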