Given unsupervised language modeling as the pretraining task, the objective is to generate text under particular control attributes (e.g., topic, sentiment).
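As an illustration, the sketch below conditions a pretrained causal language model on control attributes by prepending control codes to the prompt (in the style of CTRL). The control tokens, the prompt, and the use of `gpt2` are assumptions for illustration only; an off-the-shelf model is not trained to follow these codes without fine-tuning on attribute-labelled data.

```python
# Minimal sketch: attribute-controlled generation via control-code prefixes.
# The control codes below are hypothetical; a real system would fine-tune
# the model so that these prefixes steer topic and sentiment.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical control attributes prepended to the prompt.
control_codes = "[TOPIC=science] [SENTIMENT=positive]"
prompt = f"{control_codes} The latest discovery"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Alternatives to control-code conditioning include decoding-time approaches (e.g., attribute classifiers guiding generation) that leave the pretrained model unchanged.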
These leaderboards are used to track progress in Controllable Language Modelling.
No benchmarks available.
Use these libraries to find Controllable Language Modelling models and implementations.
No datasets available.
No subtasks available.