Author: Huang, Y.
Title: DICE: Deep Significance Clustering for Outcome-Driven Stratification
Cord-id: zae2w3an
Document date: 2020-10-04
Document: We present deep significance clustering (DICE), a framework for jointly performing representation learning and clustering for "outcome-driven" stratification. Motivated by practical needs in medicine to risk-stratify patients into subgroups, DICE brings self-supervision to unsupervised tasks to generate cluster membership that may be used to categorize unseen patients by risk levels. DICE is driven by a combined objective function and constraint which require a statistically significant association between the outcome and cluster membership of learned representations. DICE also performs a neural architecture search to optimize cluster membership and hyper-parameters for model likelihood and classification accuracy. The performance of DICE was evaluated using two datasets with different outcome ratios extracted from real-world electronic health records of patients who were treated for coronavirus disease 2019 and heart failure. Outcomes are defined as in-hospital mortality (15.9%) and discharge home (36.8%), respectively. Results show that DICE has superior performance as measured by the difference in outcome distribution across clusters, Silhouette score, Calinski-Harabasz index, and Davies-Bouldin index for clustering, and Area under the ROC Curve for outcome classification compared to baseline approaches.
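The abstract describes a combined objective: a clustering/representation term optimized jointly with an outcome-classification term, so that cluster membership is predictive of the outcome. A minimal toy sketch of that idea is below. It is not the paper's architecture (DICE uses learned LSTM representations, a significance constraint, and neural architecture search); the 1-D features, hard nearest-centroid assignment, per-cluster outcome rates, and the weight `lam` are all illustrative assumptions.

```python
import math

def assign(xs, centroids):
    """Assign each point to its nearest centroid (hard clustering)."""
    return [min(range(len(centroids)), key=lambda k: (x - centroids[k]) ** 2)
            for x in xs]

def combined_loss(xs, ys, centroids, lam=1.0, eps=1e-6):
    """Clustering loss + lam * outcome log loss.

    The within-cluster squared distance stands in for the representation
    likelihood; the per-cluster outcome rate acts as the predicted outcome
    probability, so clusters are rewarded for separating outcomes.
    """
    labels = assign(xs, centroids)
    sq = sum((x - centroids[k]) ** 2 for x, k in zip(xs, labels))
    rates = {}
    for k in set(labels):
        members = [y for y, kk in zip(ys, labels) if kk == k]
        rates[k] = sum(members) / len(members)
    ll = -sum(y * math.log(rates[k] + eps) +
              (1 - y) * math.log(1 - rates[k] + eps)
              for y, k in zip(ys, labels))
    return sq + lam * ll

# Two feature groups whose outcomes differ (toy data):
xs = [0.0, 0.1, 0.2, 0.9, 1.0, 1.1]
ys = [0, 0, 0, 1, 1, 1]
loss = combined_loss(xs, ys, centroids=[0.1, 1.0])
```

Centroids that both fit the features and separate the outcomes score lower than centroids that ignore the outcome, which is the intuition behind "outcome-driven" stratification.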