Author: Xuehai He; Xingyi Yang; Shanghang Zhang; Jinyu Zhao; Yichen Zhang; Eric Xing; Pengtao Xie
Title: Sample-Efficient Deep Learning for COVID-19 Diagnosis Based on CT Scans
Document date: 2020-04-17
ID: l3f469ht_19
Document: Self-supervised learning (SSL) aims to learn meaningful representations of input data without using human annotations. It creates auxiliary tasks solely from the input data and forces deep networks to learn highly effective latent features by solving these auxiliary tasks. Various strategies have been proposed to construct auxiliary tasks, based on temporal correspondence [29], [30], cross-modal consistency [31], etc. Examples of auxiliary tasks include rotation prediction [32], image inpainting [33], automatic colorization [34], and context prediction [35]. Some recent works study self-supervised representation learning based on instance discrimination [36] with contrastive learning. Oord et al. propose contrastive predictive coding (CPC) to extract useful representations from high-dimensional data [37]. Bachman et al. propose a self-supervised representation learning approach based on maximizing mutual information between features extracted from multiple views of a shared context [38]. Most recently, Chen et al. present a simple framework for contrastive learning (SimCLR) [39] with larger batch sizes and extensive data augmentation [40], which achieves results comparable with supervised learning. Momentum Contrast (MoCo) [41], [42] expands the idea of contrastive learning with an additional dictionary and a momentum encoder. While previous methods concentrate on utilizing self-supervision to learn universal representations regardless of labels, our Self-Trans instead aims to boost the performance of supervised transfer learning with self-supervised pretraining on unlabeled data. Inspired by [42], [43], we aim to leverage self-supervised learning for COVID-CT recognition, for which there are limited COVID-19 samples but abundant unlabeled CTs.

[Figure: Chest computed tomography (CT) images of patients infected with 2019-nCoV on admission to hospital. A, Chest CT scan obtained on February 2, 2020, from a 39-year-old man, showing bilateral ground glass opacities. B, Chest CT scan obtained on February 6, 2020, from a 45-year-old man, showing bilateral ground glass opacities. C, Chest CT scan taken on January 27, 2020, from a 48-year-old man (discharged after treatment on day 9), showing patchy shadows. D, Chest CT scan taken on January 23, 2020, from a 34-year-old man (discharged after treatment on day 11), showing patchy shadows.]
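The contrastive objective underlying SimCLR and MoCo can be illustrated with a minimal sketch of the NT-Xent (normalized temperature-scaled cross-entropy) loss: two augmented views of the same image form a positive pair, and all other samples in the batch act as negatives. This is an illustrative NumPy version under assumed shapes and a default temperature, not the code used in the paper.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss sketch (SimCLR-style).

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Row i of z1 and row i of z2 form a positive pair; every other row in the
    batch serves as a negative. Names and the temperature are illustrative.
    """
    N = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)                # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize rows
    sim = z @ z.T / temperature                         # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    # positive index for row i is i+N (and i for row i+N)
    pos = np.concatenate([np.arange(N, 2 * N), np.arange(N)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * N), pos].mean()
```

In MoCo the same objective is kept, but the negatives come from a large momentum-updated dictionary queue rather than only the current batch, which decouples the number of negatives from the batch size.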