In this work I implement and compare several seq2seq architectures on the Question Generation task (the inverse of Question Answering), using the SQuAD dataset. Question Generation is useful for automatically creating tests from a given passage of text, and therefore has applications in the education sector. It can also make other models self-supervised: since BERT performs well at extractive Question Answering, a question-generation model can supply it with training data. For example, in an entity recognition task, this model can be trained to generate a question, which a BERT model then answers from the context, thereby identifying the entity. I aim to extend this work with newer Transformer models and to include heuristics.
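As a minimal sketch of how the inverse-QA framing can be prepared from SQuAD-style records (the `<hl>` highlight token and the helper name are my own illustrative choices, not taken from the original work): each (context, answer, question) triple becomes a seq2seq pair where the model reads the context with the answer span marked and must generate the question.

```python
def to_qg_pair(context, answer, question):
    """Turn one SQuAD-style (context, answer, question) triple into a
    seq2seq (source, target) pair for Question Generation: the encoder
    input is the context with the answer highlighted, and the decoder
    target is the question."""
    # Mark the first occurrence of the answer span so the model
    # knows which fact the generated question should ask about.
    source = context.replace(answer, f"<hl> {answer} <hl>", 1)
    return source, question

# Invented sample record, in the shape of a SQuAD entry.
context = ("The Normans were the people who in the 10th and 11th centuries "
           "gave their name to Normandy, a region in France.")
source, target = to_qg_pair(
    context, "Normandy", "What region did the Normans give their name to?"
)
print(source)
print(target)
```

The same pairs can then be fed to any encoder-decoder model; the highlight token simply needs to be added to the tokenizer's vocabulary.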