Transliteration is a key component of machine translation systems and software internationalization. This paper demonstrates that neural sequence-to-sequence models achieve state-of-the-art or near-state-of-the-art results on existing datasets. To make machine transliteration more accessible, we release a new Arabic-to-English transliteration dataset and our trained models as open source.