Author: Abushaqra, Futoon M.; Xue, Hao; Ren, Yongli; Salim, Flora D.
Title: PIETS: Parallelised Irregularity Encoders for Forecasting with Heterogeneous Time-Series
ID: y2n6moz0
Document date: 2021-09-30
Document: Heterogeneity and irregularity of multi-source data sets present a significant challenge to time-series analysis. In the literature, the fusion of multi-source time-series has been achieved either by using ensemble learning models, which ignore temporal patterns and correlations within features, or by defining a fixed-size window to select specific parts of the data sets. On the other hand, many studies have shown major improvements in handling the irregularity of time-series, yet none of them has been applied to multi-source data. In this work, we design a novel architecture, PIETS, to model heterogeneous time-series. PIETS has the following characteristics: (1) irregularity encoders for multi-source samples that can leverage all available information and accelerate the convergence of the model; (2) parallelised neural networks that provide flexibility and avoid information overload; and (3) an attention mechanism that highlights different information and assigns high importance to the most relevant data. Through extensive experiments on real-world data sets related to COVID-19, we show that the proposed architecture effectively models heterogeneous temporal data and outperforms other state-of-the-art approaches on the prediction task.
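The three components listed in the abstract can be illustrated with a minimal numerical sketch. This is not the paper's implementation: the encoder here is a toy masked mean over observed timestamps (standing in for the learned irregularity encoders), the attention scores are hand-set rather than produced by a learned layer, and all function names, shapes, and values are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_mean_encoder(x, mask):
    """Toy irregularity-aware encoder: average only the observed steps.

    x    : (T, d) series values for one source
    mask : (T,)   1.0 where a timestamp was observed, 0.0 where missing
    """
    denom = max(mask.sum(), 1.0)          # avoid division by zero
    return (x * mask[:, None]).sum(axis=0) / denom

def attention_fuse(encodings, scores):
    """Softmax-attention fusion over per-source encodings.

    encodings : (S, d) one vector per source (parallel encoder outputs)
    scores    : (S,)   unnormalised relevance scores (hypothetical here;
                       a real model would learn them)
    """
    w = np.exp(scores - scores.max())     # numerically stable softmax
    w = w / w.sum()
    return w @ encodings, w

# Three heterogeneous sources with different lengths and missingness,
# encoded in parallel and fused by attention.
lengths = (10, 25, 7)
sources = [rng.normal(size=(t, 4)) for t in lengths]
masks = [(rng.random(t) > 0.3).astype(float) for t in lengths]

enc = np.stack([masked_mean_encoder(x, m) for x, m in zip(sources, masks)])
fused, weights = attention_fuse(enc, scores=np.array([0.5, 2.0, 1.0]))

print(weights)        # attention over the three sources, sums to 1
print(fused.shape)    # (4,) fused representation fed to the predictor
```

Because each source is encoded independently before fusion, sources of different lengths and sampling rates never need to be aligned to a common window, which is the motivation the abstract gives for the parallelised design.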