3260 papers • 126 benchmarks • 313 datasets
Table-to-Text Generation is the task of generating a natural-language description from a structured table. Source: Key Fact as Pivot: A Two-Stage Model for Low Resource Table-to-Text Generation
These leaderboards are used to track progress in Table-to-Text Generation.
Use these libraries to find Table-to-Text Generation models and implementations.
Prefix-tuning is proposed as a lightweight alternative to fine-tuning for natural language generation tasks: it keeps the language model parameters frozen and instead optimizes a sequence of continuous task-specific vectors, called the prefix.
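A minimal PyTorch sketch of the idea, assuming a decoder-style language model that accepts input embeddings; the real method prepends trainable vectors to the keys and values of every attention layer, so `PrefixTunedLM`, `prefix_len`, and the initialization scale here are illustrative simplifications:

```python
import torch
import torch.nn as nn

class PrefixTunedLM(nn.Module):
    """Freeze a pretrained LM; train only a short sequence of prefix
    vectors that is prepended to the input embedding stream."""
    def __init__(self, lm: nn.Module, embed_dim: int, prefix_len: int = 10):
        super().__init__()
        self.lm = lm
        for p in self.lm.parameters():  # keep all LM weights frozen
            p.requires_grad = False
        # the only trainable parameters: continuous task-specific vectors
        self.prefix = nn.Parameter(torch.randn(prefix_len, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # prepend the learned prefix to every sequence in the batch
        batch = input_embeds.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        return self.lm(torch.cat([prefix, input_embeds], dim=1))
```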
A neural model for concept-to-text generation is introduced that scales to large, rich domains and significantly outperforms a classical Kneser-Ney language model adapted to this task by nearly 15 BLEU.
Attention visualizations and case studies show that the novel structure-aware seq2seq architecture, which consists of a field-gating encoder and a description generator with dual attention, can generate coherent and informative descriptions based on a comprehensive understanding of both the content and the structure of a table.
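A minimal sketch of the dual-attention step, assuming word-level and field-level encodings are already computed; the multiplicative combination mirrors the high-level description above, but the shapes and the name `dual_attention` are illustrative:

```python
import torch
import torch.nn.functional as F

def dual_attention(dec_state, word_enc, field_enc):
    """Combine word-level and field-level attention multiplicatively,
    then renormalize. dec_state: [B,H]; word_enc/field_enc: [B,T,H]."""
    word_score = torch.bmm(word_enc, dec_state.unsqueeze(2)).squeeze(2)    # [B,T]
    field_score = torch.bmm(field_enc, dec_state.unsqueeze(2)).squeeze(2)  # [B,T]
    weights = F.softmax(word_score, dim=1) * F.softmax(field_score, dim=1)
    weights = weights / (weights.sum(dim=1, keepdim=True) + 1e-8)
    # context vector over the word encodings: [B,H]
    return torch.bmm(weights.unsqueeze(1), word_enc).squeeze(1)
```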
This paper proposes an order-planning text generation model in which order information is explicitly captured by link-based attention, and a self-adaptive gate combines the link-based attention with traditional content-based attention.
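A minimal sketch of how such a gate might combine the two attention signals, assuming content-based and link-based attention energies are precomputed; `SelfAdaptiveGate` and its scalar-gate design are illustrative rather than the paper's exact formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAdaptiveGate(nn.Module):
    """Mix content-based and link-based attention energies with a
    learned scalar gate conditioned on the decoder state."""
    def __init__(self, hidden: int):
        super().__init__()
        self.gate = nn.Linear(hidden, 1)

    def forward(self, dec_state, content_score, link_score):
        # dec_state: [B,H]; scores: [B,T] pre-softmax attention energies
        z = torch.sigmoid(self.gate(dec_state))          # [B,1], in (0,1)
        mixed = z * content_score + (1 - z) * link_score
        return F.softmax(mixed, dim=1)                   # attention weights
```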
This work builds a generation framework based on a pointer network that can copy facts from the input KB, and adds two attention mechanisms: (i) slot-aware attention to capture the association between a slot type and its corresponding slot value; and (ii) a new table position self-attention to capture the inter-dependencies among related slots.
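A minimal sketch of the underlying pointer/copy step that such a framework builds on, mixing a generation distribution with attention-weighted copying from the source; the slot-aware and table-position attention specifics are omitted, and all names are illustrative:

```python
import torch
import torch.nn.functional as F

def copy_distribution(gen_logits, attn_weights, src_token_ids, p_gen):
    """Pointer/copy step: blend the vocabulary distribution with a copy
    distribution that scatters attention mass onto source token ids.
    gen_logits: [B,V]; attn_weights: [B,T] (sums to 1);
    src_token_ids: [B,T] LongTensor; p_gen: [B,1] in (0,1)."""
    gen_dist = F.softmax(gen_logits, dim=1) * p_gen
    copy_dist = torch.zeros_like(gen_dist)
    copy_dist.scatter_add_(1, src_token_ids, attn_weights * (1 - p_gen))
    return gen_dist + copy_dist  # a valid distribution over the vocab
```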
A new metric, PARENT, is proposed that aligns n-grams from the reference and generated texts to the semi-structured data before computing their precision and recall, and it remains applicable when the reference texts are elicited from humans using the data from the WebNLG challenge.
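A simplified, word-overlap flavor of the entailed-precision idea; the real PARENT metric uses a more careful entailment model and also computes recall against both the reference and the table:

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def parent_style_precision(pred, ref, table_tokens, n=2):
    """Credit a predicted n-gram if it occurs in the reference OR all of
    its tokens occur in the table (a crude stand-in for entailment)."""
    pred_ngrams, ref_ngrams = ngrams(pred, n), ngrams(ref, n)
    table = set(table_tokens)
    credit = sum(c for g, c in pred_ngrams.items()
                 if g in ref_ngrams or all(tok in table for tok in g))
    total = sum(pred_ngrams.values())
    return credit / total if total else 0.0

# e.g. parent_style_precision("the cat sat".split(), "a cat sat".split(),
#                             ["cat", "sat", "mat"], n=2)
```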
A novel model is proposed that separates table-to-text generation into two stages, key fact prediction and surface realization, which requires far less annotated data and can be trained with a pseudo-parallel corpus.
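A minimal sketch of the two-stage pipeline, assuming a stage-1 key-fact scorer and a stage-2 surface realizer are available as callables; all names and the 0.5 threshold are hypothetical:

```python
def two_stage_generate(table_cells, key_fact_scorer, surface_realizer,
                       threshold=0.5):
    """Stage 1: score each (attribute, value) cell and keep the key facts.
    Stage 2: realize the selected facts as text."""
    key_facts = [cell for cell in table_cells
                 if key_fact_scorer(cell) > threshold]
    return surface_realizer(key_facts)

# e.g. two_stage_generate([("name", "Ada"), ("born", "1815")],
#                         scorer, realizer)
```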
This work develops a table cell fusion gate to combine representations from the row, column, and time dimensions into one dense vector according to the saliency of each dimension's representation.
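A minimal sketch of such a fusion gate, assuming row, column, and time representations of a cell are already computed; the saliency scoring here is a simple learned softmax, which may differ from the paper's exact gating:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CellFusionGate(nn.Module):
    """Weight a cell's row-, column-, and time-dimension representations
    by learned saliency scores and sum them into one dense vector."""
    def __init__(self, hidden: int):
        super().__init__()
        self.score = nn.Linear(hidden, 1)

    def forward(self, row_repr, col_repr, time_repr):
        # each input: [B,H] -> stacked: [B,3,H]
        stacked = torch.stack([row_repr, col_repr, time_repr], dim=1)
        saliency = F.softmax(self.score(stacked), dim=1)  # [B,3,1]
        return (saliency * stacked).sum(dim=1)            # fused vector [B,H]
```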
Experimental results demonstrate that a basic attention-based seq2seq model trained with the exponential moving average technique achieves state-of-the-art results in both neural table-to-text generation and neural question generation, i.e., text generation from structured and unstructured data.
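A minimal sketch of the exponential moving average technique for PyTorch model parameters, with the averaged weights typically swapped in at evaluation time; the decay value is illustrative:

```python
class EMA:
    """Maintain an exponential moving average of a torch.nn.Module's
    trainable parameters."""
    def __init__(self, model, decay=0.9999):
        self.decay = decay
        self.shadow = {n: p.detach().clone()
                       for n, p in model.named_parameters() if p.requires_grad}

    def update(self, model):
        # shadow <- decay * shadow + (1 - decay) * current parameters
        for n, p in model.named_parameters():
            if n in self.shadow:
                self.shadow[n].mul_(self.decay).add_(p.detach(),
                                                     alpha=1 - self.decay)

    def copy_to(self, model):
        # load the averaged weights, e.g. before evaluation
        for n, p in model.named_parameters():
            if n in self.shadow:
                p.data.copy_(self.shadow[n])
```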
This paper proposes the variational template machine (VTM), a novel method for generating text descriptions from data tables, which utilizes both small parallel data and large raw text without aligned tables to enrich template learning.
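A skeleton of the VTM idea, assuming hypothetical table-encoder and decoder modules; it shows only the reparameterized latent template variable and the joint conditioning, not the paper's full architecture or training losses:

```python
import torch
import torch.nn as nn

class VariationalTemplateMachine(nn.Module):
    """A continuous latent template variable z (learnable from raw text
    alone) and a content variable c (encoded from the table) jointly
    condition the decoder. Module internals are placeholders."""
    def __init__(self, table_enc, decoder, hidden: int, z_dim: int):
        super().__init__()
        self.table_enc = table_enc             # table -> content vector c
        self.decoder = decoder                 # (c, z) -> text
        self.mu = nn.Linear(hidden, z_dim)     # posterior over template z
        self.logvar = nn.Linear(hidden, z_dim)

    def forward(self, table, text_repr):
        # text_repr: an encoding of the target text at training time
        c = self.table_enc(table)
        mu, logvar = self.mu(text_repr), self.logvar(text_repr)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        # a KL term against the prior on z is added to the loss
        return self.decoder(c, z), mu, logvar
```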