Knowledge-graph-to-text (KG-to-text) generation aims to generate high-quality texts that are consistent with the input graphs. Description from: JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs
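As a rough illustration of the task setup (not taken from any particular paper below), a common baseline linearizes the input triples into a flat sequence and lets a sequence-to-sequence PLM verbalize them; the model name, triple markers, and example triples in this sketch are assumptions.

```python
# Minimal sketch of the KG-to-text plumbing: linearize triples, then generate
# with a generic seq2seq PLM. Markers and model choice are illustrative only.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

triples = [
    ("Alan_Bean", "birthPlace", "Wheeler,_Texas"),
    ("Alan_Bean", "occupation", "Test_pilot"),
]

# Simple linearization: "<H> head <R> relation <T> tail" per triple.
linearized = " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer(linearized, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In practice the PLM would be fine-tuned on a KG-to-text dataset such as WebNLG before its output is usable; the snippet only shows the input/output plumbing.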
It is suggested that pretrained language models (PLMs) benefit from similar facts seen during pretraining or fine-tuning, such that they perform well even when the input graph is reduced to a simple bag of node and edge labels.
This work addresses the problem of generating coherent multi-sentence texts from the output of an information extraction system, in particular a knowledge graph, by introducing a novel graph-transforming encoder that can leverage the relational structure of such knowledge graphs without imposing linearization or hierarchical constraints.
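One way to picture an encoder that leverages the relational structure without linearization is self-attention restricted to graph neighbours; the snippet below is a hypothetical single-head simplification, not the paper's actual graph-transforming encoder.

```python
import torch
import torch.nn.functional as F

def graph_masked_attention(node_states, adjacency):
    """Single-head self-attention where each node attends only to its graph
    neighbours (and itself); a toy stand-in for a graph-aware encoder layer."""
    d = node_states.size(-1)
    scores = node_states @ node_states.transpose(-2, -1) / d ** 0.5
    mask = adjacency + torch.eye(adjacency.size(0))        # allow self-loops
    scores = scores.masked_fill(mask == 0, float("-inf"))  # block non-edges
    weights = F.softmax(scores, dim=-1)
    return weights @ node_states

# 3 nodes with 4-dim states; edges 0-1 and 1-2.
h = torch.randn(3, 4)
A = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
print(graph_masked_attention(h, A).shape)  # torch.Size([3, 4])
```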
This paper proposes an alternative encoder based on graph convolutional networks that directly exploits the input structure and reports results on two graph-to-sequence datasets that empirically show the benefits of explicitly encoding the input graph structure.
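For reference, the core of a graph-convolutional encoder is a neighbourhood-normalized linear transform per layer; the snippet below implements the generic GCN propagation rule, not the paper's full graph-to-sequence model.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                          # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))  # degree normalizer
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# 3 nodes with 4-dim features, projected to 2 dims.
H = np.random.randn(3, 4)
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
W = np.random.randn(4, 2)
print(gcn_layer(H, A, W).shape)  # (3, 2)
```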
It is shown that rare items strongly impact performance, that combining delexicalisation and copying yields the strongest improvement, that copying underperforms for rare and unseen items, and that the impact of these two mechanisms varies greatly depending on how the dataset is constructed and how it is split into train, dev and test sets.
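Delexicalisation in this context means replacing (rare) entity mentions with placeholder slots before generation and restoring the surface forms afterwards; the helper below is a minimal hypothetical version of that step.

```python
def delexicalise(triples):
    """Replace entity strings with numbered placeholders and keep a map so
    surface forms can be restored after generation (a toy version only)."""
    slot_of, delex = {}, []
    for h, r, t in triples:
        for e in (h, t):
            if e not in slot_of:
                slot_of[e] = f"ENTITY_{len(slot_of)}"
        delex.append((slot_of[h], r, slot_of[t]))
    return delex, {v: k for k, v in slot_of.items()}

triples = [("Aarhus_Airport", "cityServed", "Aarhus,_Denmark")]
delex, restore_map = delexicalise(triples)
generated = "ENTITY_1 is served by ENTITY_0."      # pretend model output
for slot, surface in restore_map.items():
    generated = generated.replace(slot, surface.replace("_", " "))
print(generated)  # Aarhus, Denmark is served by Aarhus Airport.
```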
This work proposes to apply a bidirectional Graph2Seq model to encode the KG subgraph, and enhances the RNN decoder with a node-level copying mechanism to allow direct copying of node attributes from the KG subgraph to the output question.
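The node-level copying mechanism can be pictured as a pointer-generator-style mixture of the decoder's vocabulary distribution and its attention over KG nodes; the function below is a schematic of that mixture, with all tensor shapes and names assumed for illustration.

```python
import torch
import torch.nn.functional as F

def mix_copy_distribution(vocab_logits, node_attention, node_to_vocab, p_gen):
    """Blend the decoder's vocabulary distribution with a distribution over
    input KG nodes, so node labels can be copied directly into the output."""
    p_vocab = F.softmax(vocab_logits, dim=-1) * p_gen
    # Scatter copy probability mass onto the vocabulary ids of the node labels.
    return p_vocab.scatter_add(-1, node_to_vocab, node_attention * (1.0 - p_gen))

vocab_logits = torch.randn(1, 100)                # decoder scores over 100 words
node_attention = torch.tensor([[0.7, 0.2, 0.1]])  # attention over 3 KG nodes
node_to_vocab = torch.tensor([[5, 17, 42]])       # vocab id of each node label
p_final = mix_copy_distribution(vocab_logits, node_attention, node_to_vocab, 0.6)
print(p_final.sum())  # ~1.0
```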
This work combines both encoding strategies, proposing novel neural models that encode an input graph using both global and local node contexts in order to learn better contextualized node embeddings.
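A simple way to picture combining global and local node contexts is to concatenate each node's own embedding with a neighbourhood aggregate and a whole-graph summary; the snippet below is an assumed simplification, not the models proposed in the paper.

```python
import numpy as np

def contextualize_nodes(H, A):
    """Toy combination of a local context (mean over graph neighbours) and a
    global context (mean over all nodes) for each node embedding."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    local = (A @ H) / deg                                   # neighbourhood average
    global_ctx = np.repeat(H.mean(axis=0, keepdims=True), H.shape[0], axis=0)
    return np.concatenate([H, local, global_ctx], axis=1)

H = np.random.randn(3, 4)
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
print(contextualize_nodes(H, A).shape)  # (3, 12)
```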
This work presents a large-scale and challenging dataset that involves retrieving abundant knowledge about various types of main entities from a large knowledge graph (KG), which makes current graph-to-sequence models suffer severely from information loss and parameter explosion while generating the descriptions.
This paper studies how to automatically generate a natural language text that describes the facts in a knowledge graph (KG) and makes three major technical contributions: representation alignment for bridging the semantic gap between KG encodings and PLMs, relation-biased KG linearization for deriving better input representations, and multi-task learning for learning the correspondence between KG and text.
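Relation-biased KG linearization can be read as ordering and marking triples by relation before they are fed to the PLM; the function below is an illustrative guess at such a preprocessing step, not the paper's exact procedure.

```python
def relation_biased_linearize(triples):
    """Group triples by relation and emit a marked token sequence, so triples
    sharing a relation sit next to each other in the PLM input (illustrative)."""
    by_relation = {}
    for h, r, t in triples:
        by_relation.setdefault(r, []).append((h, t))
    pieces = []
    for r in sorted(by_relation):
        pieces.append(f"[REL] {r}")
        for h, t in by_relation[r]:
            pieces.append(f"[HEAD] {h} [TAIL] {t}")
    return " ".join(pieces)

triples = [
    ("Alan_Bean", "occupation", "Test_pilot"),
    ("Alan_Bean", "birthPlace", "Wheeler,_Texas"),
    ("William_Anders", "occupation", "Fighter_pilot"),
]
print(relation_biased_linearize(triples))
```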
This work proposes a Deep ReAder-Writer (DRAW) network, which consists of a Reader that can extract knowledge graphs from input paragraphs and discover potential knowledge, a graph-to-text Writer that generates a novel paragraph, and a Reviewer that reviews the generated paragraph from three different aspects.
This work proposes knowledge-grounded pre-training (KGPT), which consists of two parts: 1) a general knowledge-grounded generation model to generate knowledge-enriched text, and 2) a pre-training paradigm on a massive knowledge-grounded text corpus crawled from the web.