Author: Wu, Wenbin; Liu, Tong; Yang, Jiahao
Title: CACRNN: A Context-Aware Attention-Based Convolutional Recurrent Neural Network for Fine-Grained Taxi Demand Prediction
Document date: 2020-04-17
ID: tk768wnh
Document: As taxis are a primary mode of public transport in metropolises, accurately predicting fine-grained passenger taxi demand in real time is important for guiding drivers in planning their routes and for reducing passenger waiting times. Many efforts have been made to provide accurate taxi demand prediction, and deep neural networks have recently been leveraged. However, existing works are limited in how they incorporate multi-view taxi demand predictions, simply assigning fixed weights, learned during training, to the predictions for each region. To solve this problem, we apply the attention mechanism to leverage contextual information in assisting prediction, and propose a context-aware attention-based convolutional recurrent neural network (CACRNN). Specifically, we forecast fine-grained taxi demand by considering multi-view features, including spatial correlations among adjacent regions, short-term periodicity, long-term periodicity, and the impact of external factors. Local convolutional (LC) layers and gated recurrent units (GRUs) are utilized to extract these features from historical records. Moreover, a context-aware attention module, our novel contribution, is employed to combine the predictions produced for each region from the different features. This module assigns different weights to a region's predictions according to its contextual information, such as the weather, the index of the time slot, and the region's function. We conduct comprehensive experiments on a large-scale real-world dataset from New York City, and the results show that our method outperforms state-of-the-art baselines.
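The fusion step described in the abstract, replacing fixed per-region weights with context-dependent attention over multi-view predictions, can be illustrated with a minimal sketch. This is not the authors' implementation: the projection matrix `W` and the function `context_aware_fusion` are hypothetical stand-ins for the learned attention parameters, and the context vector is assumed to encode features such as weather and time-slot index.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def context_aware_fusion(view_preds, context, W):
    """Combine multi-view demand predictions for one region.

    view_preds: (V,) predictions from V views (e.g. spatial, short-term
                periodic, long-term periodic).
    context:    (C,) contextual features of the region and time slot.
    W:          (V, C) hypothetical learned projection that maps the
                context to one relevance score per view.
    """
    scores = W @ context       # per-view relevance under this context
    weights = softmax(scores)  # context-dependent, not fixed, weights
    return float(weights @ view_preds)
```

With an all-zero projection the views are averaged uniformly; a projection that favors one view pulls the fused prediction toward that view's output, which is the behavior the attention module is meant to learn from context.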