We present ObamaNet, the first architecture that generates both audio and synchronized photo-realistic lip-sync videos from any new text. Contrary to other published lip-sync approaches, ours is composed solely of fully trainable neural modules and does not rely on any traditional computer graphics methods. More precisely, we use three main modules: a text-to-speech network based on Char2Wav, a time-delayed LSTM that generates mouth keypoints synced to the audio, and a network based on Pix2Pix that generates the video frames conditioned on the keypoints.
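To make the second module concrete, below is a minimal PyTorch sketch of a time-delayed LSTM mapping audio features to mouth keypoints. The class name, all dimensions, and the delay value are illustrative assumptions, not taken from the paper; the core idea shown is the delay trick, which lets each keypoint prediction condition on a few future audio frames.

```python
import torch
import torch.nn as nn

class TimeDelayedLSTM(nn.Module):
    """Maps a sequence of audio features to mouth keypoints.

    The time delay lets the network see `delay` future audio frames
    before emitting each keypoint frame. All sizes here are
    illustrative, not the paper's actual hyperparameters.
    """

    def __init__(self, audio_dim=26, keypoint_dim=40, hidden_dim=128, delay=5):
        super().__init__()
        self.delay = delay
        self.lstm = nn.LSTM(audio_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, keypoint_dim)

    def forward(self, audio_feats):
        # audio_feats: (batch, time, audio_dim)
        out, _ = self.lstm(audio_feats)
        keypoints = self.proj(out)
        # Drop the first `delay` outputs so the keypoints aligned with
        # audio frame t are predicted from audio up to frame t + delay.
        return keypoints[:, self.delay:, :]

# Toy usage: 100 audio frames -> 95 keypoint frames (delay = 5).
model = TimeDelayedLSTM()
audio = torch.randn(2, 100, 26)
print(model(audio).shape)  # torch.Size([2, 95, 40])
```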
Kundan Kumar
A. D. Brébisson