The project aims to build a machine translation model that converts Sumerian (a language used around 2000 BC) to English using neural networks. The model should be bidirectional, i.e. it should translate Sumerian to English as well as English to Sumerian. Neural Machine Translation (NMT) is a relatively new and highly active approach that has shown promising results on machine translation tasks. I would like to use a basic encoder-decoder architecture, with both the encoder and the decoder implemented as RNNs (specifically LSTM or GRU units). The encoder receives the input sentence, with each word represented by a previously learned embedding vector, and compresses it into a fixed-length context vector; the decoder takes that vector and generates the target-language translation. Because this basic model performs poorly on longer sentences, we will improve it with an attention-based encoder-decoder model. For complete details, please see the attached PDF or the Google Doc draft.
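To make the encoder-decoder idea concrete, here is a minimal sketch in NumPy of an RNN encoder, a greedy RNN decoder, and dot-product attention over the encoder states. All dimensions, vocabulary sizes, token ids, and weights below are illustrative assumptions (plain tanh RNN cells stand in for LSTM/GRU units), not the project's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: source/target vocabularies, embedding and hidden dims.
VOCAB_SRC, VOCAB_TGT, EMB, HID = 10, 12, 8, 16

# Embedding tables and RNN weights (randomly initialised for the sketch;
# in the real model these would be learned during training).
E_src = rng.normal(scale=0.1, size=(VOCAB_SRC, EMB))
E_tgt = rng.normal(scale=0.1, size=(VOCAB_TGT, EMB))
W_xh_enc = rng.normal(scale=0.1, size=(EMB, HID))
W_hh_enc = rng.normal(scale=0.1, size=(HID, HID))
W_xh_dec = rng.normal(scale=0.1, size=(EMB, HID))
W_hh_dec = rng.normal(scale=0.1, size=(HID, HID))
W_out = rng.normal(scale=0.1, size=(HID, VOCAB_TGT))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def encode(src_ids):
    """Run the encoder RNN over the source sentence; return one hidden
    state per source token (the last one is the fixed-length summary)."""
    h = np.zeros(HID)
    states = []
    for tok in src_ids:
        h = np.tanh(E_src[tok] @ W_xh_enc + h @ W_hh_enc)
        states.append(h)
    return np.stack(states)

def decode(enc_states, max_len=5, bos=0):
    """Greedy decoding with dot-product attention: at each step, score
    every encoder state against the decoder state, form a context vector
    as their weighted sum, and emit the most likely next token."""
    h = enc_states[-1]          # initialise from the final encoder state
    tok, out = bos, []
    for _ in range(max_len):
        h = np.tanh(E_tgt[tok] @ W_xh_dec + h @ W_hh_dec)
        weights = softmax(enc_states @ h)   # attention over source positions
        context = weights @ enc_states      # weighted sum of encoder states
        tok = int(np.argmax((h + context) @ W_out))
        out.append(tok)
    return out

enc_states = encode([3, 1, 4])   # e.g. a 3-token source sentence
translation = decode(enc_states) # target-language token ids
```

Without the attention step, the decoder would depend only on the single final encoder state, which is exactly what degrades on long sentences; attending over all encoder states gives the decoder access to every source position at every step.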