3260 papers • 126 benchmarks • 313 datasets
Text generation is the task of producing text that is indistinguishable from human-written text. In the literature, this task is more formally known as "natural language generation". Text generation can be addressed with Markov processes or deep generative models such as LSTMs. More recently, some of the most advanced methods include Transformer-based models such as BART and GPT, as well as GAN-based approaches. Text generation systems are evaluated either through human ratings or automatic evaluation metrics such as METEOR, ROUGE, and BLEU. Further reading: the survey "Text generation models in deep learning"; "Modern Methods for Text Generation". (Image credit: Adversarial Ranking for Language Generation)
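As a minimal illustration of the Markov-process approach mentioned above, the sketch below builds a first-order word-level Markov chain from a toy corpus and random-walks it to generate new text. The corpus string and function names are illustrative, not from any particular library.

```python
import random
from collections import defaultdict

def build_markov_chain(text, order=1):
    """Map each word-tuple of length `order` to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        chain[state].append(words[i + order])
    return chain

def generate(chain, length=10, seed=0):
    """Random-walk the chain to produce `length` words."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(length - len(state)):
        followers = chain.get(state)
        if not followers:  # dead end: restart from a random state
            state = rng.choice(list(chain))
            followers = chain[state]
        out.append(rng.choice(followers))
        state = tuple(out[-len(state):])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
chain = build_markov_chain(corpus)
print(generate(chain, length=8))
```

Because each next word is sampled only from words that actually followed the current state in the corpus, short spans of the output read locally fluent, but the model has no long-range coherence; this is the weakness that LSTMs and Transformer-based models address.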
These leaderboards are used to track progress in text generation
Use these libraries to find text generation models and implementations