Author: Wanyan, Tingyi; Honarvar, Hossein; Jaladanki, Suraj K.; Zang, Chengxi; Naik, Nidhi; Somani, Sulaiman; De Freitas, Jessica K.; Paranjpe, Ishan; Vaid, Akhil; Zhang, Jing; Miotto, Riccardo; Wang, Zhangyang; Nadkarni, Girish N.; Zitnik, Marinka; Azad, Ariful; Wang, Fei; Ding, Ying; Glicksberg, Benjamin S.
Title: Contrastive Learning Improves Critical Event Prediction in COVID-19 Patients
Cord-id: q9zhi34g
Document date: 2021_10_25
Snippet: Deep Learning (DL) models typically require large-scale, balanced training data to be robust, generalizable, and effective in the context of healthcare. This has been a major issue for developing DL models for the coronavirus disease 2019 (COVID-19) pandemic, where data are highly class imbalanced. Conventional DL approaches use cross-entropy loss (CEL), which often yields poor classification margins. We show that models trained with contrastive loss (CL) outperform those trained with CEL, especially on imbalanced electronic health records (EHR) data for COVID-19 analyses.
Document: Deep Learning (DL) models typically require large-scale, balanced training data to be robust, generalizable, and effective in the context of healthcare. This has been a major issue for developing DL models for the coronavirus disease 2019 (COVID-19) pandemic, where data are highly class imbalanced. Conventional DL approaches use cross-entropy loss (CEL), which often yields poor classification margins. We show that models trained with contrastive loss (CL) outperform those trained with CEL, especially on imbalanced electronic health records (EHR) data for COVID-19 analyses. We use a diverse EHR data set to predict three outcomes in hospitalized COVID-19 patients over multiple time windows: mortality, intubation, and intensive care unit (ICU) transfer. To compare the performance of CEL and CL, models are tested on the full data set and a restricted data set. CL models consistently outperform CEL models, with differences ranging from 0.04 to 0.15 in AUPRC and 0.05 to 0.10 in AUROC.
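The record contrasts cross-entropy loss with contrastive loss for class-imbalanced EHR outcome prediction but does not spell out the loss itself. The sketch below is a generic supervised contrastive loss in PyTorch (in the style of Khosla et al.), offered only as an illustration of the idea; the authors' exact loss, model architecture, and hyperparameters (batch size, temperature) are not given in this record, so every name and value here is an assumption.

# Illustrative sketch only, not the authors' implementation.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Pull same-class patient embeddings together and push different-class ones apart."""
    z = F.normalize(embeddings, dim=1)                # unit-norm representations
    sim = z @ z.T / temperature                       # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))   # drop self-pairs from the softmax
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                            # anchors with at least one same-class positive
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return (per_anchor[valid] / pos_counts[valid]).mean()

# Hypothetical usage: a batch of 32 patient embeddings with imbalanced binary outcome labels.
emb = torch.randn(32, 128)
y = torch.randint(0, 2, (32,))
loss = supervised_contrastive_loss(emb, y)

A loss of this kind is typically combined with, or used to pre-train for, a standard classification head; how the authors integrated CL with CEL for the mortality, intubation, and ICU-transfer predictions is not described in this record.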