Authors: Gupta, Varun; Krzyżak, Adam
Title: An Empirical Evaluation of Attention and Pointer Networks for Paraphrase Generation
ID: nul9jewu
Document date: 2020-05-22
Document: In computer vision, a common practice for augmenting an image dataset is to create new images through similarity-preserving geometric transformations. This kind of data augmentation was one of the most significant factors in winning the ImageNet competition in 2012 with large neural networks. Unlike in computer vision and speech processing, few techniques have been explored for augmenting data in natural language processing (NLP). The only technique explored for text data is lexical substitution, which focuses on replacing words with synonyms. In this paper, we investigate the use of different pointer networks with sequence-to-sequence models, which have shown excellent results in neural machine translation (NMT) and text simplification, to generate similar sentences (paraphrases) with the help of the Paraphrase Database (PPDB). These paraphrases are evaluated by augmenting the training set of the IMDb movie review dataset and comparing the resulting model's performance with a baseline model. To the best of our knowledge, this is the first study to generate paraphrases using these models with the PPDB dataset.
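The two mechanisms named in the abstract can be made concrete with brief sketches. First, the lexical-substitution baseline, i.e. replacing a word with a synonym. A minimal sketch using NLTK's WordNet interface (not taken from the paper; the helper name `synonym_candidates` is hypothetical, and the `wordnet` corpus must be downloaded with `nltk.download('wordnet')` first):

    from nltk.corpus import wordnet as wn

    def synonym_candidates(word):
        """Collect distinct single-word WordNet synonyms of `word`."""
        lemmas = {l.name() for s in wn.synsets(word) for l in s.lemmas()}
        # Drop the word itself and multi-word lemmas (joined with '_').
        return sorted(w for w in lemmas if w != word and "_" not in w)

    print(synonym_candidates("movie"))  # e.g. ['film', 'flick', 'picture', ...]

Second, the attention step that attention-based and pointer-network decoders share: score each encoder state against the current decoder state, normalize with a softmax, and either form a context vector (attention) or read the weights as a distribution over source positions to copy from (pointer network). A minimal NumPy sketch with assumed toy dimensions, not the paper's model:

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def attention_step(decoder_state, encoder_states):
        """decoder_state: (d,); encoder_states: (src_len, d)."""
        scores = encoder_states @ decoder_state   # one dot-product score per source token
        weights = softmax(scores)                 # attention distribution over source positions
        context = weights @ encoder_states        # weighted sum of encoder states (attention)
        return weights, context

    # Toy example: 4 source tokens, hidden size 3.
    rng = np.random.default_rng(0)
    enc = rng.normal(size=(4, 3))
    dec = rng.normal(size=(3,))
    weights, context = attention_step(dec, enc)
    # A pointer network would copy the source token at weights.argmax().
    print(weights, weights.argmax())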