3260 papers • 126 benchmarks • 313 datasets
Generating natural language text from a conceptualized representation, such as an ontology.
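As a concrete illustration of the task definition above, here is a minimal sketch of a template-based concept-to-text baseline (hypothetical code, not any specific paper's method): it verbalizes subject-predicate-object triples from an ontology-like input into English sentences. The predicate names and templates are invented for illustration.

```python
# Hypothetical template-based concept-to-text baseline: verbalize
# (subject, predicate, object) triples into English sentences.

TEMPLATES = {
    "capital_of": "{s} is the capital of {o}.",
    "population": "{s} has a population of {o}.",
}

def verbalize(triples):
    """Turn a list of (subject, predicate, object) triples into text.

    Unknown predicates fall back to a generic "{s} <predicate> {o}." pattern
    with underscores replaced by spaces.
    """
    sentences = []
    for s, p, o in triples:
        template = TEMPLATES.get(p, "{s} " + p.replace("_", " ") + " {o}.")
        sentences.append(template.format(s=s, o=o))
    return " ".join(sentences)

print(verbalize([("Paris", "capital_of", "France"),
                 ("Paris", "population", "2.1 million")]))
# → Paris is the capital of France. Paris has a population of 2.1 million.
```

Neural approaches replace the hand-written templates with a learned encoder-decoder, but the input/output contract (structured concepts in, fluent text out) is the same.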
These leaderboards are used to track progress in Concept-to-Text Generation
Use these libraries to find Concept-to-Text Generation models and implementations
No subtasks available.
A neural model for concept-to-text generation is introduced that scales to large, rich domains and significantly outperforms a classical Kneser-Ney language model adapted to this task by nearly 15 BLEU points.
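The gains in results like the one above are reported in BLEU, which scores a generated sentence by its n-gram overlap with a reference. A simplified sentence-level sketch (uniform n-gram weights, brevity penalty, no smoothing; production evaluations typically use a library implementation) follows:

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty. Sketch only;
    real evaluations use smoothed, corpus-level implementations."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n])
                       for i in range(len(tokens) - n + 1))

    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        c_counts, r_counts = ngrams(cand, n), ngrams(ref, n)
        # Clipped overlap: each candidate n-gram counts at most as often
        # as it appears in the reference.
        overlap = sum(min(c, r_counts[g]) for g, c in c_counts.items())
        total = max(sum(c_counts.values()), 1)
        log_precisions.append(math.log(max(overlap, 1e-9) / total))

    # Brevity penalty: penalize candidates shorter than the reference.
    bp = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return bp * math.exp(sum(log_precisions) / max_n)
```

A perfect match scores 1.0 (often reported as 100), so a 15-BLEU-point gap is a large margin on this scale.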
Comprehensive evaluation and analysis demonstrate that VisCTG noticeably improves model performance while addressing several issues of the baseline generations, including poor commonsense, fluency, and specificity.
An in-depth qualitative analysis illustrates that SAPPHIRE effectively addresses many issues of the baseline model generations, including lack of commonsense, insufficient specificity, and poor fluency.
Adding a benchmark result helps the community track progress.