Selected article for: "observation step and time observation step"

Author: Martin, Alice; Ollion, Charles; Strub, Florian; Le Corff, Sylvain; Pietquin, Olivier
Title: The Monte Carlo Transformer: a stochastic self-attention model for sequence prediction
  • Cord-id: 72ki30vf
  • Document date: 2020-07-15
    Document: This paper introduces the Sequential Monte Carlo Transformer, an original approach that naturally captures the observations distribution in a recurrent architecture. The keys, queries, values and attention vectors of the network are considered as the unobserved stochastic states of its hidden structure. This generative model is such that at each time step the received observation is a random function of these past states in a given attention window. In this general state-space setting, we use Sequential Monte Carlo methods to approximate the posterior distributions of the states given the observations, and then to estimate the gradient of the log-likelihood. We thus propose a generative model providing a predictive distribution, instead of a single-point estimate.
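
    The computational core the abstract describes is standard Sequential Monte Carlo: maintain a cloud of particles over the latent states, weight each particle by the likelihood of the current observation, resample, and accumulate an estimate of the log-likelihood. The sketch below illustrates that generic machinery on a toy linear-Gaussian state-space model; it is not the paper's attention-based model, and the transition/observation functions, noise scales, and particle count are all placeholder assumptions.

        # Minimal bootstrap particle filter on a toy linear-Gaussian
        # state-space model: an illustrative sketch of generic SMC, not the
        # paper's stochastic self-attention model. All model choices below
        # (transition, noise scales, particle count) are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)

        # Toy model: x_t = a * x_{t-1} + noise, y_t = x_t + noise.
        a, sigma_x, sigma_y = 0.9, 0.5, 0.3
        T, N = 50, 200  # sequence length, number of particles

        # Simulate a synthetic observation sequence from the model.
        x = np.zeros(T)
        for t in range(1, T):
            x[t] = a * x[t - 1] + sigma_x * rng.standard_normal()
        y = x + sigma_y * rng.standard_normal(T)

        def log_gauss(v, mean, std):
            # Gaussian log-density, used for the observation weights.
            return -0.5 * np.log(2 * np.pi * std**2) - 0.5 * ((v - mean) / std) ** 2

        # Bootstrap particle filter: propagate, weight, resample.
        particles = rng.standard_normal(N)  # initial particle cloud
        log_lik = 0.0
        for t in range(T):
            # Propagate particles through the stochastic transition.
            particles = a * particles + sigma_x * rng.standard_normal(N)
            # Weight each particle by the observation likelihood.
            logw = log_gauss(y[t], particles, sigma_y)
            # SMC estimate of the log-likelihood increment (log-sum-exp).
            m = logw.max()
            log_lik += m + np.log(np.mean(np.exp(logw - m)))
            # Multinomial resampling towards high-weight particles.
            w = np.exp(logw - m)
            w /= w.sum()
            particles = particles[rng.choice(N, size=N, p=w)]

        print(f"SMC log-likelihood estimate over {T} steps: {log_lik:.2f}")

    In the paper's setting, the latent states would be the stochastic keys, queries, values, and attention vectors, and this per-step log-likelihood estimate is what gets differentiated to train the generative model.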

    Search for related documents:
    Co-phrase search for related documents:
    • accurately predict and loss function: 1
    • adam algorithm and loss function: 1, 2
    • adam algorithm and lstm layer: 1
    • additional result and loss function: 1
    • local minima and long range: 1
    • local minima and loss function: 1, 2