Selected article for: "control cell and frame image"

Author: Swarna Kamal Paul; Saikat Jana; Parama Bhaumik
Title: A multivariate spatiotemporal spread model of COVID-19 using ensemble of ConvLSTM networks
  • Document date: 2020_4_22
  • ID: nng76upj_13
    Document: Recurrent neural networks (RNNs) are a class of artificial neural networks whose nodes have feedback connections, allowing them to learn patterns in variable-length temporal sequences. However, traditional RNNs struggle to learn long-term dependencies because of the vanishing gradient problem [9]. LSTMs [10] solve this problem by introducing a specialized memory cell as the recurrent unit. The cell can selectively remember and forget long-term information in its cell state through control gates. In convolutional LSTM [5], a convolution operator is added to the state-to-state and input-to-state transitions. All inputs, outputs, and hidden states are represented by 3D tensors with two spatial dimensions and one temporal dimension, which allows the model to capture spatial correlation along with temporal correlation. In our model we configured a multichannel input so that distinct features can be passed through different channels. Multiple convolutional LSTM layers are stacked sequentially to form a network with high representational capability. The network terminates in a 3D convolutional layer with a single filter, which constructs a single-channel output image as the next-frame prediction. A single model may be prone to overfitting on the training dataset and may lose stability in its predictions. Creating an ensemble of diverse models intended to solve the same task and combining their predictions typically improves test accuracy and stability [12]. We used bagging, or bootstrap aggregation [13], to create an ensemble of models: 60% random samples are drawn with replacement from the original training dataset, and an ensemble of five models is trained individually. During prediction the output of each model is weighted according to the following equation.
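
    The gating mechanism described above can be sketched as a single ConvLSTM cell step. This is a minimal toy illustration, not the paper's implementation: it is single-channel, omits bias terms and peephole connections, and the weight names (`xi`, `hi`, etc.) are hypothetical labels for the input-to-state and state-to-state convolution kernels.

    ```python
    import numpy as np

    def conv2d_same(x, k):
        """2D 'same'-padded convolution (cross-correlation) of one single-channel map."""
        kh, kw = k.shape
        ph, pw = kh // 2, kw // 2
        xp = np.pad(x, ((ph, ph), (pw, pw)))
        out = np.empty_like(x)
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
        return out

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def convlstm_step(x, h, c, W):
        """One ConvLSTM step: unlike a plain LSTM, both the input-to-state (Wx*)
        and state-to-state (Wh*) transitions are convolutions, so spatial
        structure in the frame is preserved through the recurrence."""
        i = sigmoid(conv2d_same(x, W["xi"]) + conv2d_same(h, W["hi"]))  # input gate
        f = sigmoid(conv2d_same(x, W["xf"]) + conv2d_same(h, W["hf"]))  # forget gate
        o = sigmoid(conv2d_same(x, W["xo"]) + conv2d_same(h, W["ho"]))  # output gate
        g = np.tanh(conv2d_same(x, W["xg"]) + conv2d_same(h, W["hg"]))  # candidate
        c_next = f * c + i * g          # selectively forget and write cell state
        h_next = o * np.tanh(c_next)    # expose gated hidden state
        return h_next, c_next

    rng = np.random.default_rng(0)
    W = {k: 0.1 * rng.standard_normal((3, 3))
         for k in ["xi", "hi", "xf", "hf", "xo", "ho", "xg", "hg"]}
    x = rng.standard_normal((8, 8))           # one frame, two spatial dimensions
    h = np.zeros((8, 8)); c = np.zeros((8, 8))
    h, c = convlstm_step(x, h, c, W)
    ```

    Stacking several such layers, then a final convolution with one filter, yields a single-channel next-frame prediction of the same spatial size as the input.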
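
    The bagging procedure can be sketched as follows. Note that the paper's actual weighting equation is truncated in this record, so the inverse-validation-error weighting below is purely an assumed placeholder, and the error values and prediction arrays are fabricated stand-ins for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_samples = 100
    data = rng.standard_normal((n_samples, 4))  # stand-in training set

    # Bagging: each of the five models trains on a bootstrap sample of
    # 60% of the data, drawn with replacement, as described in the text.
    bootstrap_sets = [
        data[rng.choice(n_samples, size=int(0.6 * n_samples), replace=True)]
        for _ in range(5)
    ]

    # Stand-in "next-frame" predictions from the five trained models.
    preds = [rng.standard_normal((8, 8)) for _ in range(5)]

    # Hypothetical validation errors; the paper's real weighting equation
    # is not shown in this snippet, so inverse-error weighting is assumed.
    val_errors = np.array([0.30, 0.25, 0.40, 0.35, 0.28])
    weights = (1.0 / val_errors) / np.sum(1.0 / val_errors)  # normalize to 1

    # Combined ensemble prediction: weighted average of the member outputs.
    combined = sum(w * p for w, p in zip(weights, preds))
    ```

    The diversity induced by the differing bootstrap samples is what lets the weighted combination reduce variance relative to any single member model.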
