This project is a sequence-to-sequence (seq2seq) model for German-to-English translation using attention and transformer models, based on "Effective Approaches to Attention-based Neural Machine Translation".
Abstract: The novelty of this project is not merely importing modules, preparing data, and feeding it to a model, but understanding how language translation actually works: implementing the logic underlying each method and building every function from scratch, resulting in a complete Neural Machine Translation model. Initially, translation was accomplished by simply substituting words from one language with those of another. However, because languages differ fundamentally, a greater degree of context (e.g., phrases and sentences) is required to achieve effective results. With the introduction of deep learning, modern software employs statistical and neural techniques that have proven more effective at translation.

We translate German to English using sequence-to-sequence models with attention and transformer models. Everyone has access to the power of Google Translate, but if you want to learn how to implement translation in code, this project shows you how. We write the code from scratch, without relying on high-level libraries, in order to understand how each model works. While this design is a little dated, it is still a great project to work on if you want to learn about attention mechanisms before moving on to Transformers. It is based on "Effective Approaches to Attention-based Neural Machine Translation" and is a sequence-to-sequence (seq2seq) model for German-to-English translation.
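To illustrate the kind of from-scratch logic described above, here is a minimal sketch of the global attention mechanism with the "dot" score from the referenced paper, written with plain NumPy. The function and variable names (`luong_dot_attention`, `decoder_state`, `encoder_states`) are illustrative assumptions, not the project's actual API; this is a simplified sketch of the technique, not the project's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def luong_dot_attention(decoder_state, encoder_states):
    """Global attention with the 'dot' score.

    decoder_state:  shape (hidden,)         current decoder hidden state
    encoder_states: shape (src_len, hidden) all encoder hidden states
    Returns the context vector and the attention weights.
    """
    scores = encoder_states @ decoder_state   # (src_len,) alignment scores
    weights = softmax(scores)                 # normalize into a distribution
    context = weights @ encoder_states        # (hidden,) weighted sum
    return context, weights

# Toy check: 4 source positions, hidden size 3.
enc = np.random.randn(4, 3)
dec = np.random.randn(3)
ctx, w = luong_dot_attention(dec, enc)
```

The attention weights sum to 1 and the context vector has the same dimensionality as a single encoder state; in the full model, the context is concatenated with the decoder state to predict the next target word.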