Selected article for: "long short term memory and lstm memory"

Author: Latif, Seemab; Bashir, Sarmad; Agha, Mir Muntasar Ali; Latif, Rabia
Title: Backward-Forward Sequence Generative Network for Multiple Lexical Constraints
  • Cord-id: 7vouj8pp
  • Document date: 2020_5_6
  • ID: 7vouj8pp
    Snippet: Advancements in Long Short Term Memory (LSTM) Networks have shown remarkable success in various Natural Language Generation (NLG) tasks. However, generating sequences from pre-specified lexical constraints is a new, challenging and less researched area in NLG. Lexical constraints take the form of words in the language model’s output to create fluent and meaningful sequences. Furthermore, most of the previous approaches cater to this problem by allowing the inclusion of pre-specified lexical constr
    Document: Advancements in Long Short Term Memory (LSTM) Networks have shown remarkable success in various Natural Language Generation (NLG) tasks. However, generating sequences from pre-specified lexical constraints is a new, challenging and less researched area in NLG. Lexical constraints take the form of words in the language model’s output to create fluent and meaningful sequences. Furthermore, most of the previous approaches cater to this problem by allowing the inclusion of pre-specified lexical constraints during the decoding process, which increases the decoding complexity exponentially or linearly with the number of constraints. Moreover, most of the previous approaches can only deal with a single constraint. In this paper, we propose a novel neural probabilistic architecture based on a backward-forward language model and a word embedding substitution method that can cater to multiple lexical constraints for generating quality sequences. Experiments show that our proposed architecture outperforms previous methods in terms of intrinsic evaluation.
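    Illustrative sketch (not the authors' code): the abstract describes a backward-forward language model that builds a sentence around lexical constraint words. A minimal way to picture this is two LSTM language models, one decoding right-to-left from the constraint word to produce the left context and one decoding left-to-right to produce the right context. The class and function names, vocabulary size, greedy decoding, and single-constraint setup below are assumptions for illustration only.

    ```python
    # Minimal sketch of backward-forward generation around one lexical constraint.
    # All sizes, names, and the greedy decoder are illustrative placeholders.
    import torch
    import torch.nn as nn

    class LSTMLanguageModel(nn.Module):
        def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def step(self, token_id, state=None):
            # One decoding step: embed the current token, predict the next one.
            emb = self.embed(token_id.view(1, 1))
            output, state = self.lstm(emb, state)
            logits = self.out(output[:, -1, :])
            return logits, state

    def generate_half(model, start_id, eos_id, max_len=10):
        # Greedy decoding from a seed token until EOS or max_len.
        tokens, state = [], None
        current = torch.tensor(start_id)
        for _ in range(max_len):
            logits, state = model.step(current, state)
            current = logits.argmax(dim=-1).squeeze()
            if current.item() == eos_id:
                break
            tokens.append(current.item())
        return tokens

    # Hypothetical usage: two separately trained LSTM language models, one reading
    # right-to-left (backward) and one left-to-right (forward), both seeded with
    # the constraint word; the halves are stitched around the constraint.
    vocab_size, constraint_id, eos_id = 1000, 42, 0
    backward_lm = LSTMLanguageModel(vocab_size)
    forward_lm = LSTMLanguageModel(vocab_size)
    left = list(reversed(generate_half(backward_lm, constraint_id, eos_id)))
    right = generate_half(forward_lm, constraint_id, eos_id)
    sentence_ids = left + [constraint_id] + right
    ```

    How the paper extends this to multiple constraints (the word embedding substitution method mentioned in the abstract) is not detailed in this excerpt, so it is not sketched here.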

    Search related documents:
    Co-phrase search for related documents
    • adam optimizer and lstm layer: 1
    • adam optimizer and lstm model: 1, 2
    • adam train and loss function: 1
    • address need and machine translation: 1