3260 papers • 126 benchmarks • 313 datasets
Knowledge editing aims to update or inject specific facts in a trained language model without full retraining, while minimizing side effects on unrelated knowledge.
These leaderboards are used to track progress in Knowledge Editing
Use these libraries to find Knowledge Editing models and implementations
No subtasks available.
This paper introduces the task of editing language-model-based KG embeddings and proposes a simple yet strong baseline, KGEditor, which uses additional parametric layers of a hypernetwork to edit or add facts.
This work presents MQuAKE (Multi-hop Question Answering for Knowledge Editing), a benchmark of multi-hop questions that test whether edited models correctly answer questions whose answers should change as an entailed consequence of edited facts, and proposes MeLLo, a simple memory-based approach.
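The memory-based idea behind approaches like MeLLo can be illustrated with a toy sketch: edited facts live in an external memory, each reasoning hop is checked against that memory first, and only unedited hops fall back to the frozen model. Here a plain dict stands in for the frozen language model, and all names and the fact format are illustrative assumptions, not MeLLo's actual implementation.

```python
# Stand-in for a frozen LM's stored facts: (subject, relation) -> object.
FROZEN_MODEL = {
    ("Eiffel Tower", "located_in"): "Paris",
    ("Paris", "capital_of"): "France",
    ("Rome", "capital_of"): "Italy",
}

class EditMemory:
    """External store of edited facts; the model itself is never updated."""
    def __init__(self):
        self.edits = {}

    def add_edit(self, subject, relation, new_object):
        self.edits[(subject, relation)] = new_object

    def lookup(self, subject, relation):
        return self.edits.get((subject, relation))

def answer_hop(memory, subject, relation):
    """Prefer an edited fact; otherwise fall back to the frozen model."""
    edited = memory.lookup(subject, relation)
    if edited is not None:
        return edited
    return FROZEN_MODEL.get((subject, relation))

def answer_multihop(memory, subject, relations):
    """Chain single hops so an edit propagates to multi-hop answers."""
    entity = subject
    for relation in relations:
        entity = answer_hop(memory, entity, relation)
        if entity is None:
            return None
    return entity

memory = EditMemory()
memory.add_edit("Eiffel Tower", "located_in", "Rome")
# Multi-hop question: "What country is the city containing the
# Eiffel Tower the capital of?" After the edit, hop 1 yields "Rome"
# and hop 2 yields "Italy" instead of the pre-edit "France".
```

MQuAKE-style evaluation asks exactly for this behavior: the edited single-hop fact must change the entailed multi-hop answer, not just the directly edited one.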
Experiments show that in-context knowledge editing (IKE), without any gradient computation or parameter updates, achieves a success rate competitive with gradient-based methods on GPT-J (6B) while causing far fewer side effects, including less over-editing of similar but unrelated facts and less forgetting of previously stored knowledge.
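The core mechanism of in-context editing is prompt construction rather than weight updates: the new fact is placed in the prompt alongside a few demonstrations, and the frozen model is queried as-is. The sketch below shows one plausible prompt template; the exact demonstration format is an assumption, not the IKE paper's actual templates.

```python
def build_ike_prompt(new_fact, demonstrations, question):
    """Assemble an in-context editing prompt: demonstrations of how a
    'New fact' should override the model's answer, then the real edit
    and the real question. No model parameters are touched."""
    lines = []
    for demo_fact, demo_question, demo_answer in demonstrations:
        lines.append(f"New fact: {demo_fact}")
        lines.append(f"Q: {demo_question}")
        lines.append(f"A: {demo_answer}")
    lines.append(f"New fact: {new_fact}")
    lines.append(f"Q: {question}")
    lines.append("A:")  # the frozen model completes from here
    return "\n".join(lines)

demos = [
    ("The capital of France is Lyon.",
     "What is the capital of France?",
     "Lyon"),
]
prompt = build_ike_prompt(
    "The Eiffel Tower is located in Rome.",
    demos,
    "In which city is the Eiffel Tower located?",
)
```

Because the edit exists only in the prompt, removing it restores the original model behavior, which is one reason this style of editing tends to cause fewer side effects than parameter updates.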
EasyEdit is proposed, an easy-to-use knowledge editing framework for LLMs that supports various cutting-edge knowledge editing approaches and can be readily applied to many well-known LLMs such as T5, GPT-J, and LLaMA.
This paper collects a large-scale cross-lingual synthetic dataset, applies English edits with knowledge editing methods spanning different paradigms, evaluates the resulting performance in Chinese (and vice versa), and thereby characterizes the cross-lingual effect in knowledge editing.
This study introduces MLaKE (Multilingual Language Knowledge Editing), a novel benchmark comprising 4072 multi-hop and 5360 single-hop questions designed to evaluate the adaptability of knowledge editing methods across five languages: English, Chinese, Japanese, French, and German.
A new benchmark named RaKE is constructed, which focuses on relation-based knowledge editing, and confirms that relation knowledge is stored not only in the feed-forward (FFN) layers but also in the attention layers, providing experimental support for future relation-based knowledge editing methods.
A unified categorization criterion is proposed that classifies knowledge editing methods into three groups: resorting to external knowledge, merging knowledge into the model, and editing intrinsic knowledge; an in-depth analysis of knowledge location is also provided, giving a deeper understanding of the knowledge structures inherent in LLMs.
This paper shows that localization conclusions from representation denoising give no insight into which MLP layer is best to edit in order to override an existing stored fact with a new one, and finds that the choice of edited layer is a far better predictor of editing performance.
It is demonstrated that a context-distillation-based approach can both impart knowledge about entities and propagate that knowledge to enable broader inferences, and that it is more effective at propagating knowledge updates than fine-tuning and other gradient-based knowledge-editing methods.