How accurately can we infer an individual's speech style and content from their lip movements? In this task, the model is trained on a specific speaker, or on a very limited set of speakers [1]. [1] Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis, CVPR 2020.
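The task description above says the model is fit to one speaker (or a handful). A minimal sketch of that idea, assuming illustrative shapes and synthetic data (none of this comes from the cited paper): learn a per-speaker linear mapping from lip-landmark features to mel-spectrogram frames.

```python
import numpy as np

# Hypothetical sketch of speaker-specific lip-to-speech:
# one regressor per speaker maps lip features (per video frame)
# to mel-spectrogram frames. Shapes and data are illustrative.

rng = np.random.default_rng(0)

N_FRAMES, LIP_DIM, MEL_DIM = 200, 40, 80

# Synthetic "training data" for a single speaker.
lips = rng.normal(size=(N_FRAMES, LIP_DIM))            # lip features
true_W = rng.normal(size=(LIP_DIM, MEL_DIM))           # unknown mapping
mels = lips @ true_W + 0.01 * rng.normal(size=(N_FRAMES, MEL_DIM))

# Fit the speaker-specific mapping by least squares.
W, *_ = np.linalg.lstsq(lips, mels, rcond=None)

# Predict mel frames for new lip movements of the SAME speaker.
pred = lips @ W
err = np.mean((pred - mels) ** 2)
print(f"mean squared error: {err:.5f}")
```

Because the mapping is fit to a single speaker's data, it can capture that speaker's idiosyncratic style, but it will not transfer to unseen speakers; that is the defining constraint of this task.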
These leaderboards are used to track progress in Speaker-Specific Lip to Speech Synthesis.
Use these libraries to find Speaker-Specific Lip to Speech Synthesis models and implementations.
No subtasks available.
Adding a benchmark result helps the community track progress.