Compared with traditional relation extraction, continual relation extraction (CRE) requires a model to learn new relations as they arrive while maintaining accurate classification of previously learned ones.
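A minimal sketch of this setup, assuming a PyTorch classifier, a sequence of per-task datasets, and a small episodic memory of old-relation exemplars; all names (`train_continual`, `memory_per_relation`, etc.) are illustrative and not taken from any specific CRE system.

```python
import random
import torch
import torch.nn.functional as F

def train_continual(model, tasks, optimizer, memory_per_relation=10, epochs=3):
    """Learn relations task by task, replaying a small memory of old relations."""
    memory = []                                   # stored (embedding, label) pairs from old relations
    for task in tasks:                            # tasks (sets of new relations) arrive sequentially
        for _ in range(epochs):
            for x, y in task:                     # train on the new relations...
                replay = random.sample(memory, min(len(memory), len(y)))
                if replay:                        # ...while replaying a few old exemplars
                    xr = torch.stack([xi for xi, _ in replay])
                    yr = torch.stack([yi for _, yi in replay])
                    x, y = torch.cat([x, xr]), torch.cat([y, yr])
                loss = F.cross_entropy(model(x), y)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        for x, y in task:                         # keep a few exemplars per newly learned relation
            for xi, yi in zip(x, y):
                if sum(1 for _, m in memory if m == yi) < memory_per_relation:
                    memory.append((xi, yi))
```

The papers listed below differ mainly in how they use this memory and how they constrain the model so that old relations are not forgotten.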
The proposed model outperforms state-of-the-art CRE models and has a clear advantage in avoiding catastrophic forgetting while using the memory information sufficiently and efficiently, resulting in improved CRE performance.
A novel curriculum-meta learning method is proposed to tackle catastrophic forgetting and order sensitivity in continual relation extraction by quickly adapting model parameters to a new task and reducing the interference of previously seen tasks on the current one.
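One plausible way to realize the "quick adaptation" part is a Reptile-style first-order meta-update, sketched below; the curriculum component (difficulty-aware ordering of memory tasks) is omitted, and nothing here is taken from the cited paper's implementation.

```python
import copy
import torch
import torch.nn.functional as F

def meta_adapt(model, support_batches, inner_lr=1e-3, meta_lr=0.5):
    """Reptile-style update: adapt a copy on the new task, then move slow weights toward it."""
    fast = copy.deepcopy(model)                          # temporary fast weights
    inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
    for x, y in support_batches:                         # a few inner steps on the new task
        loss = F.cross_entropy(fast(x), y)
        inner_opt.zero_grad()
        loss.backward()
        inner_opt.step()
    with torch.no_grad():                                # outer update toward the adapted weights
        for p, q in zip(model.parameters(), fast.parameters()):
            p.add_(meta_lr * (q - p))
    return model
```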
A consistent representation learning method is proposed that keeps relation embeddings stable by applying contrastive learning and knowledge distillation when replaying memory, effectively alleviating forgetting.
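A minimal sketch of such a consistency objective on replayed memory samples, combining a distillation term toward the frozen previous encoder with a supervised contrastive term; the function and variable names are assumptions for illustration, not the cited paper's code.

```python
import torch
import torch.nn.functional as F

def replay_consistency_loss(encoder, old_encoder, x_mem, y_mem, temperature=0.1):
    z = F.normalize(encoder(x_mem), dim=-1)                  # current embeddings of memory samples
    with torch.no_grad():
        z_old = F.normalize(old_encoder(x_mem), dim=-1)      # embeddings from the frozen old model

    # knowledge distillation: keep the pairwise similarity structure close to the old model's
    sim_new = z @ z.t() / temperature
    sim_old = z_old @ z_old.t() / temperature
    kd_loss = F.kl_div(F.log_softmax(sim_new, dim=-1),
                       F.softmax(sim_old, dim=-1), reduction="batchmean")

    # supervised contrastive loss: pull together memory samples sharing the same relation
    same = (y_mem.unsqueeze(0) == y_mem.unsqueeze(1)).float()
    same.fill_diagonal_(0)
    mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    logits = (z @ z.t() / temperature).masked_fill(mask, float("-inf"))  # exclude self-similarity
    log_prob = F.log_softmax(logits, dim=-1)
    con_loss = -(same * log_prob).sum(1) / same.sum(1).clamp(min=1)

    return kd_loss + con_loss.mean()
```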
This paper encourages the model to learn more precise and robust representations through a simple yet effective adversarial class augmentation mechanism (ACA), which is easy to implement, model-agnostic, and consistently improves the performance of state-of-the-art CRE models on two popular benchmarks.
This work proposes a simple yet effective classifier decomposition framework that splits the last FFN layer into separate previous and current classifiers, so as to preserve previous knowledge and encourage the model to learn more robust representations during the current training stage.
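A hedged sketch of the decomposition idea: one head is kept frozen for previously learned relations and a second head is trained for the current task's relations, with their logits concatenated into one prediction space. The class and parameter names below are illustrative, not the framework's actual API.

```python
import torch
import torch.nn as nn

class DecomposedClassifier(nn.Module):
    def __init__(self, hidden_size, num_old_relations, num_new_relations):
        super().__init__()
        self.old_head = nn.Linear(hidden_size, num_old_relations)   # previously learned relations
        self.new_head = nn.Linear(hidden_size, num_new_relations)   # relations of the current task
        for p in self.old_head.parameters():                        # preserve previous knowledge
            p.requires_grad = False

    def forward(self, hidden):
        # concatenate logits so old and new relations share one prediction space
        return torch.cat([self.old_head(hidden), self.new_head(hidden)], dim=-1)
```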
This work introduces rationales, i.e., explanations of relation classification results generated by large language models (LLMs), into the CRE task and designs a multi-task rationale tuning strategy to help the model learn current relations robustly.
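A minimal sketch of one way such multi-task tuning could be set up: a relation classification loss is combined with a token-level loss on the LLM-generated rationale text. The two-head model, tensor shapes, and weighting `alpha` are assumptions for illustration only.

```python
import torch.nn.functional as F

def rationale_tuning_loss(cls_logits, labels, rationale_logits, rationale_tokens, alpha=0.5):
    """Joint loss: predict the relation label and reproduce its rationale text."""
    cls_loss = F.cross_entropy(cls_logits, labels)                    # relation classification
    # token-level loss on the rationale (rationale_logits: [batch, seq_len, vocab])
    lm_loss = F.cross_entropy(rationale_logits.flatten(0, 1), rationale_tokens.flatten())
    return cls_loss + alpha * lm_loss
```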
This work designs memory-insensitive relation prototypes and memory augmentation to overcome the overfitting problem and introduces integrated training and focal knowledge distillation to enhance the performance on analogous relations.
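A minimal sketch under stated assumptions: relation prototypes are taken as the mean encoder embedding of each relation's memory samples, and the "focal" distillation term down-weights samples the old model already classifies confidently. The names and the exact weighting are illustrative, not the cited paper's formulation.

```python
import torch
import torch.nn.functional as F

def relation_prototypes(encoder, x_mem, y_mem):
    """One prototype per relation: the mean embedding of its memory samples."""
    z = encoder(x_mem)
    return {int(rel): z[y_mem == rel].mean(0) for rel in y_mem.unique()}

def focal_distillation_loss(new_logits, old_logits, gamma=2.0, temperature=2.0):
    """Knowledge distillation with a focal weight on samples the old model finds hard."""
    p_old = F.softmax(old_logits / temperature, dim=-1)
    log_p_new = F.log_softmax(new_logits / temperature, dim=-1)
    per_sample_kd = F.kl_div(log_p_new, p_old, reduction="none").sum(-1)
    weight = (1.0 - p_old.max(-1).values) ** gamma   # larger when the old model is less confident
    return (weight * per_sample_kd).mean()
```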