3260 papers • 126 benchmarks • 313 datasets
The task of generating text according to some pre-specified conditioning (e.g., a topic, a sentiment, or a lexical constraint).
These leaderboards are used to track progress in Conditional Text Generation.
Use these libraries to find Conditional Text Generation models and implementations.
No subtasks available.
This work considers two pragmatic modeling methods for text generation: one where pragmatics is imposed by information preservation, and another where pragmatics is imposed by explicit modeling of distractors.
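As a rough illustration of the distractor-based variant, the sketch below reranks candidate generations by how strongly they identify the target context against distractor contexts (an RSA-style listener score). The function names and the log-probability interface are assumptions; the scores could come from any base generation model.

```python
import math

def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def rerank_with_distractors(candidates, target_logps, distractor_logps, alpha=1.0):
    """Pragmatic reranking sketch: prefer candidates that are likely under the
    target context but unlikely under distractor contexts.

    target_logps[i]     -- log p(candidate_i | target context) from a base model
    distractor_logps[i] -- list of log p(candidate_i | distractor context_j)
    """
    scored = []
    for cand, lp_t, lp_ds in zip(candidates, target_logps, distractor_logps):
        # listener's (log) belief that the target context produced this candidate
        score = alpha * lp_t - logsumexp([alpha * lp for lp in [lp_t, *lp_ds]])
        scored.append((score, cand))
    return [c for score, c in sorted(scored, key=lambda s: s[0], reverse=True)]
```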
This paper proposes a simple yet effective framework that automatically extracts, denoises, and enforces important input concepts as lexical constraints; it performs comparably to or better than its unconstrained counterpart on automatic metrics, shows higher coverage for concept preservation, and receives better ratings in human evaluation.
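A minimal sketch of the enforce-as-lexical-constraints step, assuming Hugging Face's constrained beam search (`force_words_ids`). The hard-coded concept list is a stand-in for the paper's extraction and denoising pipeline, and the checkpoint is illustrative.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

source = "summarize: The committee approved the new climate policy after a long debate."
# Stand-in for the extraction + denoising stage: keep a couple of content words.
concepts = ["climate", "policy"]

force_words_ids = [tokenizer(c, add_special_tokens=False).input_ids for c in concepts]
inputs = tokenizer(source, return_tensors="pt")
out = model.generate(
    **inputs,
    force_words_ids=force_words_ids,  # each concept must appear in the output
    num_beams=5,                      # constrained decoding requires beam search
    max_new_tokens=40,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```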
GeniusAug is proposed, which first extracts target-aware sketches from the original training set and then generates new samples based on those sketches; it is demonstrated to be a strong, ready-to-use data augmentation tool for various natural language processing (NLP) tasks.
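A rough sketch of sketch-based augmentation, with BART's denoising objective standing in for the dedicated GENIUS generation model: salient words are kept as the sketch and the gaps are regenerated. The sketch string and checkpoint are illustrative only.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Stand-in for sketch extraction: keep salient words, mask everything else.
sketch = "The restaurant <mask> delicious <mask> friendly staff <mask>"
ids = tok(sketch, return_tensors="pt").input_ids
out = model.generate(ids, num_beams=4, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))  # a new augmented sample
```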
This work proposes an adaptation that directly injects arbitrary conditioning into self-attention, an approach the authors call pseudo self-attention; it outperforms strong baselines, produces coherent generations, and is data efficient.
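A single-head PyTorch sketch of the idea: the conditioning is projected into extra key/value slots inside the decoder's own self-attention, so no separate cross-attention block is added. Names and dimensions are illustrative, and causal masking is omitted for brevity.

```python
import torch
from torch import nn

class PseudoSelfAttention(nn.Module):
    """Single-head sketch: the condition contributes additional key/value
    slots inside self-attention; only k_cond/v_cond are new parameters."""

    def __init__(self, d_model: int, d_cond: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # new, trainable projections mapping the conditioning into the
        # pretrained model's key/value space
        self.k_cond = nn.Linear(d_cond, d_model)
        self.v_cond = nn.Linear(d_cond, d_model)

    def forward(self, x, cond):
        # x: (batch, seq, d_model); cond: (batch, cond_len, d_cond)
        q = self.q(x)
        k = torch.cat([self.k_cond(cond), self.k(x)], dim=1)
        v = torch.cat([self.v_cond(cond), self.v(x)], dim=1)
        attn = torch.softmax(q @ k.transpose(-2, -1) / x.size(-1) ** 0.5, dim=-1)
        return attn @ v  # (causal masking over x-positions omitted for brevity)
```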
A new framework named Pre-train and Plug-in Variational Auto-Encoder (PPVAE) is proposed for flexible conditional text generation; it decouples the text generation module from the condition representation module to allow “one-to-many” conditional generation.
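A minimal PyTorch sketch of the plug-in component, assuming a frozen pretrained text VAE with latent size `d_pretrained`: only this small module is trained per condition, mapping between a condition-specific latent space and the pretrained one.

```python
import torch
from torch import nn

class PluginVAE(nn.Module):
    """Sketch of the plug-in idea: a tiny VAE learns a condition-specific
    latent space and maps it back into the frozen pretrained VAE's latent
    space, so only this module is trained for each new condition."""

    def __init__(self, d_pretrained: int = 128, d_plug: int = 16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_pretrained, 64), nn.ReLU())
        self.mu = nn.Linear(64, d_plug)
        self.logvar = nn.Linear(64, d_plug)
        self.dec = nn.Sequential(nn.Linear(d_plug, 64), nn.ReLU(),
                                 nn.Linear(64, d_pretrained))

    def forward(self, z_pre):
        h = self.enc(z_pre)                                   # encode pretrained latent
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar                        # back into pretrained space

# To sample for a condition: draw from the plug-in prior, decode to a
# pretrained latent, then run the frozen text decoder on it (not shown).
```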
An open-domain English table-to-text dataset with over 120,000 training examples, built around a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.
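A toy sketch of preparing such an example for a seq2seq model: the highlighted cells, their column headers, and the page title are linearized into a single source string. The tag format here is illustrative, not the dataset's official one.

```python
def linearize(table, highlighted, page_title):
    """Flatten a table plus highlighted-cell coordinates into one source
    string for a seq2seq model; assumes row 0 holds the column headers."""
    parts = [f"<page_title> {page_title} </page_title>"]
    for (r, c) in highlighted:
        header = table[0][c]
        parts.append(f"<cell> {table[r][c]} <col_header> {header} </col_header> </cell>")
    return " ".join(parts)

table = [["Year", "Team", "Wins"],
         ["2019", "Red Sox", "84"],
         ["2020", "Red Sox", "24"]]
print(linearize(table, highlighted=[(1, 0), (1, 2)], page_title="Red Sox seasons"))
```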
This paper decomposes the conditional text generation problem into two tasks, make-a-blank and fill-in-the-blank, extends the former to handle more complex manipulations of the given tokens, and introduces conditional adversarial learning that lets the agents cooperate toward the goal of producing realistic texts.
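A toy, off-the-shelf illustration of the two-stage interface (the paper trains both stages as cooperating agents under adversarial learning; here a fixed heuristic blanks a token and a pretrained masked language model fills it):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="distilroberta-base")

def make_a_blank(text, drop_word):
    # Stand-in for the learned make-a-blank agent: blank out one chosen word.
    return text.replace(drop_word, fill.tokenizer.mask_token, 1)

blanked = make_a_blank("The movie was absolutely terrible.", "terrible")
print(fill(blanked)[0]["sequence"])  # fill-in-the-blank stage: best completion
```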
This work presents ETC-NLG, an approach leveraging topic modeling annotations to enable fully unsupervised End-to-end Topic-Conditioned Natural Language Generation over emergent topics in unlabeled document collections.
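A small sketch of the topic-conditioning recipe, assuming gensim for the topic model: unlabeled documents are labeled by LDA, and the inferred topic becomes a control prefix for language-model fine-tuning. The toy corpus and prefix format are illustrative.

```python
from gensim import corpora, models

docs = [["solar", "panel", "energy"], ["striker", "goal", "league"],
        ["wind", "turbine", "power"], ["coach", "match", "season"]]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)

for d, bow in zip(docs, corpus):
    # Most probable topic for each document becomes its control code.
    topic = max(lda.get_document_topics(bow), key=lambda t: t[1])[0]
    print(f"<topic_{topic}> " + " ".join(d))  # control-prefixed training text
```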
Evaluations on style transfer tasks, both with and without sequence-to-sequence supervision, show that the proposed plug-and-play Emb2Emb method performs better than or comparably to strong baselines while being up to four times faster.
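A PyTorch sketch of the supervised case, assuming a frozen autoencoder exposed as an `encode` callable: only a small embedding-to-embedding mapping is trained, and the frozen decoder later realizes mapped embeddings as text (the unsupervised variant swaps the supervised loss for adversarial terms).

```python
import torch
from torch import nn

d = 256  # illustrative embedding size of the frozen autoencoder
mapping = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
opt = torch.optim.Adam(mapping.parameters(), lr=1e-3)

def train_step(encode, x_src, x_tgt):
    """One Emb2Emb-style update: move the source embedding toward the
    target-style embedding; the autoencoder itself stays frozen."""
    with torch.no_grad():
        z_src, z_tgt = encode(x_src), encode(x_tgt)
    loss = nn.functional.mse_loss(mapping(z_src), z_tgt)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```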