Selected article for: "ground glass and recent study"

Author: Xuehai He; Xingyi Yang; Shanghang Zhang; Jinyu Zhao; Yichen Zhang; Eric Xing; Pengtao Xie
Title: Sample-Efficient Deep Learning for COVID-19 Diagnosis Based on CT Scans
  • Document date: 2020-04-17
  • ID: l3f469ht_19
    Snippet: Self-supervised learning (SSL) aims to learn meaningful representations of input data without using human annotations. It creates auxiliary tasks solely using the input data and forces deep networks to learn highly-effective latent features by solving these auxiliary tasks. Various strategies have been proposed to construct auxiliary tasks, based on temporal correspondence [29] , [30] , cross-modal consistency [31] , etc. Examples of auxiliary ta.....
    Document: Self-supervised learning (SSL) aims to learn meaningful representations of input data without using human annotations. It creates auxiliary tasks solely from the input data and forces deep networks to learn highly effective latent features by solving these auxiliary tasks. Various strategies have been proposed to construct auxiliary tasks, based on temporal correspondence [29], [30], cross-modal consistency [31], etc. Examples of auxiliary tasks include rotation prediction [32], image inpainting [33], automatic colorization [34], context prediction [35], etc. Some recent works study self-supervised representation learning based on instance discrimination [36] with contrastive learning. Oord et al. propose contrastive predictive coding (CPC) to extract useful representations from high-dimensional data [37]. Bachman et al. propose a self-supervised representation learning approach based on maximizing mutual information between features extracted from multiple views of a shared context [38]. Most recently, Chen et al. present a simple framework for contrastive learning (SimCLR) [39] with larger batch sizes and extensive data augmentation [40], which achieves results comparable with supervised learning. Momentum Contrast (MoCo) [41], [42] expands the idea of contrastive learning with an additional dictionary and a momentum encoder. While previous methods concentrate on utilizing self-supervision to learn universal representations regardless of labels, our Self-Trans instead aims to boost the performance of supervised transfer learning with self-supervised pretraining on unlabeled data. Inspired by [42], [43], we aim to leverage self-supervised learning for COVID-CT recognition, for which there are limited COVID-19 samples but abundant unlabeled CTs.
    [Figure caption: Chest computed tomography (CT) images of patients infected with 2019-nCoV on admission to hospital. A, Chest CT scan obtained on February 2, 2020, from a 39-year-old man, showing bilateral ground glass opacities. B, Chest CT scan obtained on February 6, 2020, from a 45-year-old man, showing bilateral ground glass opacities. C, Chest CT scan taken on January 27, 2020, from a 48-year-old man (discharged after treatment on day 9), showing patchy shadows. D, Chest CT scan taken on January 23, 2020, from a 34-year-old man (discharged after treatment on day 11), showing patchy shadows.]
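    The contrastive objective underlying SimCLR [39] (and, in modified form, MoCo [41], [42]) is a normalized temperature-scaled cross-entropy over pairs of augmented views: each view's embedding should be close to the embedding of the other view of the same image and far from all other embeddings in the batch. The NumPy sketch below illustrates this loss only; the function name `nt_xent_loss`, the pair layout (rows i and i+N are the two views of image i), and the temperature value are assumptions for this example, not details taken from the papers.

    ```python
    import numpy as np

    def nt_xent_loss(z, temperature=0.5):
        """Normalized temperature-scaled cross-entropy (NT-Xent) sketch.

        z: array of shape (2N, d); rows i and i+N are embeddings of the
        two augmented views of the same image (layout assumed here).
        Returns the mean contrastive loss over all 2N anchors.
        """
        # L2-normalize so the dot products below are cosine similarities
        z = z / np.linalg.norm(z, axis=1, keepdims=True)
        sim = z @ z.T / temperature
        # Exclude each anchor's similarity to itself from the softmax
        np.fill_diagonal(sim, -np.inf)
        n = z.shape[0] // 2
        # Index of the positive (the other augmented view) for each anchor
        pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
        # Cross-entropy: -log softmax probability assigned to the positive
        logsumexp = np.log(np.exp(sim).sum(axis=1))
        loss = -(sim[np.arange(2 * n), pos] - logsumexp)
        return loss.mean()
    ```

    In SimCLR the gain from larger batch sizes follows directly from this form: every other embedding in the batch acts as a negative in the denominator, so a bigger batch gives a harder, more informative softmax; MoCo instead supplies extra negatives from a queue (dictionary) filled by a momentum-updated encoder.
    
    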
