Author: Hamdi, Ali; Aboeleneen, Amr; Shaban, Khaled
Title: MARL: Multimodal Attentional Representation Learning for Disease Prediction
Cord-id: l0lu6u6d
Document date: 2021-05-01
ID: l0lu6u6d
Document: Existing learning models often utilise CT-scan images to predict lung diseases. These models suffer from high uncertainties that affect lung segmentation and visual feature learning. We introduce MARL, a novel Multimodal Attentional Representation Learning model architecture that learns useful features from multimodal data under uncertainty. We feed the proposed model with both the lung CT-scan images and the patients' corresponding historical biological records collected over time. Such rich data makes it possible to analyse both spatial and temporal aspects of the disease. MARL employs fuzzy-based spatial image segmentation to overcome uncertainties in CT-scan images. We then utilise a pre-trained Convolutional Neural Network (CNN) to learn visual representation vectors from the images. We augment the patients' data with statistical features extracted from the segmented images. We develop a Long Short-Term Memory (LSTM) network to represent the augmented data and learn sequential patterns of disease progression. Finally, we feed both the CNN and LSTM feature vectors into an attention layer that helps the model focus on the most informative features. We evaluated MARL on regression of lung disease progression and on status classification. MARL outperforms state-of-the-art CNN architectures, such as EfficientNet and DenseNet, as well as baseline prediction models. It achieves a 91% R^2 score, which is higher than the other models by 8% to 27%. MARL also achieves 97% and 92% accuracy for binary and multi-class classification, respectively, improving on the accuracy of state-of-the-art CNN models by 19% to 57%. The results show that combining spatial and sequential temporal features produces better discriminative features.
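The abstract outlines a fusion pipeline: a pre-trained CNN encodes the segmented CT images, an LSTM encodes the augmented patient-record sequences, and an attention layer weights the two modality vectors before the regression and classification heads. Below is a minimal PyTorch sketch of that idea; the ResNet-18 backbone, the hidden sizes, the additive attention formulation, and all names are illustrative assumptions rather than the paper's exact architecture (the paper compares against EfficientNet and DenseNet backbones).

import torch
import torch.nn as nn
from torchvision import models


class MARLSketch(nn.Module):
    """Hypothetical MARL-style fusion of CT-image and patient-record features."""

    def __init__(self, record_dim=16, hidden_dim=128, num_classes=3):
        super().__init__()
        # Pre-trained CNN backbone for visual features from segmented CT slices.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Identity()                 # expose the 512-d pooled features
        self.cnn = backbone
        self.img_proj = nn.Linear(512, hidden_dim)

        # LSTM over the patient's historical records augmented with
        # statistical features computed from the segmented images.
        self.lstm = nn.LSTM(record_dim, hidden_dim, batch_first=True)

        # Simple additive attention over the two modality vectors (assumed form).
        self.attn = nn.Linear(hidden_dim, 1)

        # Prediction heads: disease-progression regression and status classification.
        self.regressor = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, ct_image, record_seq):
        # ct_image: (B, 3, H, W); record_seq: (B, T, record_dim)
        img_feat = self.img_proj(self.cnn(ct_image))          # (B, hidden_dim)
        _, (h_n, _) = self.lstm(record_seq)
        seq_feat = h_n[-1]                                     # (B, hidden_dim)

        # Stack the modality vectors and weight them with attention scores.
        feats = torch.stack([img_feat, seq_feat], dim=1)       # (B, 2, hidden_dim)
        weights = torch.softmax(self.attn(feats), dim=1)       # (B, 2, 1)
        fused = (weights * feats).sum(dim=1)                   # (B, hidden_dim)

        return self.regressor(fused), self.classifier(fused)


if __name__ == "__main__":
    model = MARLSketch()
    reg, cls = model(torch.randn(2, 3, 224, 224), torch.randn(2, 10, 16))
    print(reg.shape, cls.shape)  # torch.Size([2, 1]) torch.Size([2, 3])

In this sketch both prediction heads share the attention-fused representation, mirroring the abstract's joint evaluation on progression regression and status classification; the fuzzy segmentation step is assumed to happen upstream in preprocessing.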