3260 papers • 126 benchmarks • 313 datasets
These leaderboards track progress in dialog-learning-14. No benchmarks, libraries, datasets, or subtasks are currently available for this task.
This work argues that the main factor limiting dialogue model (DM) performance is the quality of the training datasets rather than the models themselves, and introduces a synthetic dialogue generator that gives full control over the amount and type of errors injected into the dataset.
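A controlled error-injection generator of this kind can be sketched as follows. This is a minimal illustration, not the paper's actual generator: the template dialogue, the error types (`drop`, `swap_slot`), and the function names are all assumptions made for the example.

```python
import random

# Hypothetical sketch: start from clean template dialogues, then corrupt
# a controlled fraction of system turns with chosen error types.
CLEAN_TURNS = [
    ("user", "Book a table for two at 7pm."),
    ("system", "Booked a table for two at 7pm."),
    ("user", "Change it to 8pm, please."),
    ("system", "Updated the booking to 8pm."),
]

def inject_errors(dialogue, error_rate, error_types, rng):
    """Corrupt system turns at the given rate using the given error types.

    Returns the (possibly corrupted) dialogue and the number of errors
    actually introduced, so the dataset's noise level is fully known.
    """
    corrupted = []
    n_errors = 0
    for speaker, text in dialogue:
        if speaker == "system" and rng.random() < error_rate:
            kind = rng.choice(error_types)
            if kind == "drop":          # system fails to address the request
                text = "Sorry, I didn't catch that."
            elif kind == "swap_slot":   # system confirms a wrong slot value
                text = text.replace("8pm", "6pm").replace("7pm", "9pm")
            n_errors += 1
        corrupted.append((speaker, text))
    return corrupted, n_errors

# With error_rate=1.0 every system turn is corrupted, so the exact
# amount and type of noise in the resulting dataset is known.
rng = random.Random(0)
noisy, count = inject_errors(CLEAN_TURNS, error_rate=1.0,
                             error_types=["drop"], rng=rng)
```

Because the rate, types, and seed are explicit parameters, datasets with precisely graded noise levels can be produced for controlled experiments.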