Selected article for: "international license and learning model"

Author: Xuehai He; Xingyi Yang; Shanghang Zhang; Jinyu Zhao; Yichen Zhang; Eric Xing; Pengtao Xie
Title: Sample-Efficient Deep Learning for COVID-19 Diagnosis Based on CT Scans
  • Document date: 2020_4_17
  • ID: l3f469ht_70
    Snippet: • Method 1: Randomly initialize weights. Perform SSL on the COVID19-CT dataset without using COVID/Non-COVID labels. Then fine-tune on COVID19-CT using labels. • Method 2: Pretrain on ImageNet. Perform SSL on COVID19-CT without using labels and with pretrained weights. Then fine-tune on COVID19-CT using labels. • Method 3: Pretrain on ImageNet. Perform SSL on the LUNA dataset without using labels of LUNA. Then fine-tune on COVID19-CT using l.....
    Document:
    • Method 1: Randomly initialize weights. Perform SSL on the COVID19-CT dataset without using COVID/Non-COVID labels. Then fine-tune on COVID19-CT using labels.
    • Method 2: Pretrain on ImageNet. Perform SSL on COVID19-CT without using labels, starting from the pretrained weights. Then fine-tune on COVID19-CT using labels.
    • Method 3: Pretrain on ImageNet. Perform SSL on the LUNA dataset without using labels of LUNA. Then fine-tune on COVID19-CT using labels.
    • Method 4: Pretrain on ImageNet. Perform the auxiliary task of rotation prediction as an SSL baseline, jointly learning rotation prediction and COVID19-CT classification.
    • Self-Trans: Pretrain on ImageNet. Perform SSL on LUNA without using labels of LUNA. Then perform SSL on COVID19-CT without using labels of COVID19-CT, and finally fine-tune on COVID19-CT using labels.

    Table VI shows the results of these ablation settings, conducted with ResNet-50 and DenseNet-169 backbones. From this table, we observe the following. First, Method 2, which performs SSL on top of transfer learning, works much better than Method 1, which performs SSL without transfer learning. This demonstrates that it is more effective to apply SSL to pretrained weights than to train from scratch. Second, Method 2, which performs SSL on COVID19-CT (without using labels), largely outperforms Method 3, which performs SSL on LUNA. This implies that it is more effective to apply SSL directly to the data of the target task than to external data. Third, the comparison between Method 3 and Self-Trans further confirms that performing SSL directly on the target-task data (as Self-Trans does in its final SSL stage) achieves better performance. Fourth, Method 4, with an SSL auxiliary task, beats its vanilla transfer-learning counterpart but does not surpass the contrastive SSL models of Method 2 and Self-Trans. These results not only illustrate the effectiveness of SSL but also provide concrete evidence that contrastive self-supervised learning (CSSL) has a stronger feature-representation-learning capability than traditional SSL methods. Fifth, Self-Trans performs slightly better than Method 2, which demonstrates that performing an additional round of SSL on external data (LUNA) before SSL on the target data yields a further, though modest, improvement.
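    The staged Self-Trans recipe above (ImageNet pretraining, then contrastive SSL on LUNA, then contrastive SSL on COVID19-CT, then supervised fine-tuning) can be summarized in code. The sketch below is a minimal PyTorch approximation, not the authors' implementation: it substitutes a simplified SimCLR-style InfoNCE loss for whatever contrastive SSL method the paper uses, and the loaders `luna_unlabeled`, `covid_unlabeled`, and `covid_labeled` are hypothetical placeholders.

```python
# Minimal sketch of the Self-Trans pipeline. Assumptions: a simplified
# SimCLR-style InfoNCE loss stands in for the paper's contrastive SSL
# method; all dataset loaders are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE loss over two augmented views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature            # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)        # positive pairs on the diagonal

def ssl_stage(encoder, projector, loader, epochs, lr=1e-3):
    """One contrastive SSL stage; `loader` yields two augmented views per image."""
    params = list(encoder.parameters()) + list(projector.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for view1, view2 in loader:
            loss = info_nce_loss(projector(encoder(view1)), projector(encoder(view2)))
            opt.zero_grad()
            loss.backward()
            opt.step()

# Step 1: start from ImageNet-pretrained weights (transfer learning).
encoder = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
encoder.fc = nn.Identity()                        # expose 2048-d features
projector = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, 128))

# Step 2: contrastive SSL on LUNA without using its labels.
ssl_stage(encoder, projector, luna_unlabeled, epochs=100)    # hypothetical loader
# Step 3: contrastive SSL on COVID19-CT without using its labels.
ssl_stage(encoder, projector, covid_unlabeled, epochs=100)   # hypothetical loader

# Step 4: supervised fine-tuning on COVID19-CT with COVID/Non-COVID labels
# (one epoch shown; in practice this loop runs to convergence).
classifier = nn.Linear(2048, 2)
opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-4)
for images, labels in covid_labeled:                         # hypothetical loader
    loss = F.cross_entropy(classifier(encoder(images)), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

    Methods 1 through 3 are ablations of this same pipeline: Method 1 drops the ImageNet initialization, Method 2 drops the LUNA stage, and Method 3 drops the COVID19-CT SSL stage.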
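    Method 4's joint objective (supervised classification plus a rotation-prediction auxiliary loss on a shared backbone) can likewise be sketched. Again this is a hedged approximation: the auxiliary-loss weight `aux_weight` and the `covid_labeled` loader are assumptions, not values from the paper.

```python
# Minimal sketch of Method 4: joint training of a rotation-prediction
# auxiliary head and the COVID19-CT classification head on a shared
# ImageNet-pretrained backbone. `aux_weight` and `covid_labeled` are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Identity()     # expose 2048-d features
cls_head = nn.Linear(2048, 2)   # COVID vs. Non-COVID
rot_head = nn.Linear(2048, 4)   # 0/90/180/270 degree rotation classes

def rotate_batch(images):
    """Rotate each image by a random multiple of 90 degrees; return rotation labels."""
    ks = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, int(k), dims=(1, 2)) for img, k in zip(images, ks)]
    )
    return rotated, ks

params = [*backbone.parameters(), *cls_head.parameters(), *rot_head.parameters()]
opt = torch.optim.Adam(params, lr=1e-4)
aux_weight = 0.5                # assumed auxiliary-loss weight, not from the paper
for images, labels in covid_labeled:              # hypothetical labeled loader
    rotated, rot_labels = rotate_batch(images)
    cls_loss = F.cross_entropy(cls_head(backbone(images)), labels)
    rot_loss = F.cross_entropy(rot_head(backbone(rotated)), rot_labels)
    loss = cls_loss + aux_weight * rot_loss       # joint objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

    The contrast with the sketches above is the point of the ablation: here the self-supervised signal is a pretext classification task trained jointly with the target task, whereas Method 2 and Self-Trans use a separate contrastive pretraining stage.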

    Search related documents:
    Co-phrase search for related documents
    • auxiliary task and classification rotation prediction: 1
    • auxiliary task and contrastive learning: 1, 2, 3
    • classification rotation prediction and contrastive learning: 1