3260 papers • 126 benchmarks • 313 datasets
Story generation is the task of automatically generating a coherent narrative, often from a set of premises or a brief summary.
These leaderboards are used to track progress in story generation.
Use these libraries to find story generation models and implementations.
This work collects a large dataset of 300K human-written stories paired with writing prompts from an online forum, enabling hierarchical story generation in which the model first generates a premise and then transforms it into a passage of text.
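The two-stage scheme described above can be sketched as a simple pipeline. This is a minimal illustration, not the paper's implementation: the two generator functions below are hypothetical stand-ins for the trained premise and story models.

```python
# Sketch of hierarchical story generation: a premise model conditions on
# the writing prompt, and a story model conditions on the generated
# premise. Both generators here are toy placeholders.

def generate_premise(prompt: str) -> str:
    # Placeholder for a model that compresses the prompt into a premise.
    return f"Premise derived from: {prompt}"

def generate_story(premise: str) -> str:
    # Placeholder for a model that expands the premise into a passage.
    return f"{premise} ... and so the story unfolds."

def hierarchical_generate(prompt: str) -> str:
    premise = generate_premise(prompt)  # stage 1: prompt -> premise
    return generate_story(premise)      # stage 2: premise -> story

story = hierarchical_generate("A dragon guards the last library on Earth.")
print(story)
```

The point of the decomposition is that the premise stage fixes the high-level plot before any surface text is produced, which is what makes the generation "hierarchical".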
Experiments with a latent-vector planning approach based on a TD-VAE (Temporal Difference Variational Autoencoder), using the model for conditioning and reranking during text generation, demonstrate strong performance on automatic cloze and swapping evaluations.
A deep learning model, GLAC Net, is proposed that generates visual stories by combining global-local ("glocal") attention with context cascading mechanisms, achieving highly competitive results compared to state-of-the-art techniques.
Experiments show that with explicit storyline planning, the generated stories are more diverse, coherent, and on-topic than those generated without a full plan, according to both automatic and human evaluations.
PlotMachines is presented, a neural narrative model that learns to transform an outline into a coherent story by tracking the dynamic plot states, and is enriched with high-level discourse structure so that the model can learn different styles of writing corresponding to different parts of the narrative.
It is found that neural abstractive summarization models are highly prone to hallucinating content that is unfaithful to the input document, and that textual entailment measures correlate better with faithfulness than standard metrics, potentially paving the way toward automatic evaluation metrics as well as training and decoding criteria.
This paper integrates latent representation vectors with a Transformer-based pre-trained architecture to build a conditional variational autoencoder (CVAE), demonstrating state-of-the-art conditional generation ability as well as excellent representation learning capability and controllability.
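As a rough illustration of the variational machinery any (C)VAE relies on, the reparameterization trick can be written in a few lines. This is a generic, stdlib-only sketch of the sampling step, not the paper's architecture.

```python
import math
import random

def reparameterize(mu, log_var, rng=random):
    # Sample z = mu + sigma * eps with eps ~ N(0, 1) and
    # sigma = exp(0.5 * log_var), so gradients can flow through
    # mu and log_var during training.
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

# As log_var -> -inf (sigma -> 0), the sample collapses to the mean.
z = reparameterize([1.0, -2.0], [-60.0, -60.0])
```

In a CVAE, the condition (e.g., an outline or prompt) is fed to both the encoder producing `mu` and `log_var` and the decoder consuming `z`.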
A training-free framework is presented for plugging visual controls into the generation process, enabling LMs to perform multimodal tasks (e.g., image captioning) in a zero-shot manner; it outperforms the state-of-the-art method by notable margins with a nearly 27x decoding speedup.
This paper introduces a set of 6 orthogonal and comprehensive human criteria, carefully motivated by the social sciences literature, and presents HANNA, an annotated dataset of 1,056 stories produced by 10 different ASG systems, to quantitatively evaluate the correlations of 72 automatic metrics with human criteria.
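Correlating an automatic metric with human scores, as done for HANNA, typically reduces to a rank correlation. A minimal Spearman implementation (assuming no tied ranks, which real evaluations would need to handle) might look like:

```python
def spearman_rho(xs, ys):
    # Spearman's rank correlation for samples without ties:
    #   rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    # where d_i is the difference between the ranks of x_i and y_i.
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# A metric that ranks stories exactly as humans do yields rho = 1.0.
print(spearman_rho([0.1, 0.4, 0.9], [2, 3, 5]))  # 1.0
```

A metric score vector would be compared against each of the human criteria in turn, giving one correlation per metric-criterion pair.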
This work explores event representations that provide a mid-level of abstraction between words and sentences, retaining the semantic information of the original data while minimizing event sparsity.