Selected article for: "chest ct and deep learning"

Author: Li, Zekun; Zhao, Wei; Shi, Feng; Qi, Lei; Xie, Xingzhi; Wei, Ying; Ding, Zhongxiang; Gao, Yang; Wu, Shangjie; Liu, Jun; Shi, Yinghuan; Shen, Dinggang
Title: A novel multiple instance learning framework for COVID-19 severity assessment via data augmentation and self-supervised learning
  • Cord-id: 1qj61ked
  • Document date: 2021-02-03
    Document: How to quickly and accurately assess the severity of COVID-19 is an essential problem when millions of people around the world are suffering from the pandemic. Currently, chest CT is regarded as a popular and informative imaging tool for COVID-19 diagnosis. However, we observe two issues, weak annotation and insufficient data, that may obstruct automatic COVID-19 severity assessment with CT images. To address these challenges, we propose a novel three-component method: 1) a deep multiple instance learning component with instance-level attention to jointly classify the bag and weigh the instances, 2) a bag-level data augmentation component to generate virtual bags by reorganizing high-confidence instances, and 3) a self-supervised pretext component to aid the learning process. We systematically evaluated our method on the CT images of 229 COVID-19 cases, including 50 severe and 179 non-severe cases. Our method achieved an average accuracy of 95.8%, with 93.6% sensitivity and 96.4% specificity, outperforming previous works.
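
    Note: The abstract does not include code; the following is a minimal PyTorch sketch of what components 1 (attention-based multiple instance learning) and 2 (virtual-bag augmentation from high-confidence instances) might look like. All names, dimensions, and the top-k selection rule are illustrative assumptions, not the authors' implementation.

    ```python
    import torch
    import torch.nn as nn

    class AttentionMIL(nn.Module):
        """Classify a bag (one CT scan) of instance features (slice
        embeddings) while learning per-instance attention weights."""
        def __init__(self, feat_dim=512, hidden_dim=128, n_classes=2):
            super().__init__()
            # Scores one attention logit per instance (slice).
            self.attention = nn.Sequential(
                nn.Linear(feat_dim, hidden_dim),
                nn.Tanh(),
                nn.Linear(hidden_dim, 1),
            )
            self.classifier = nn.Linear(feat_dim, n_classes)

        def forward(self, instances):
            # instances: (n_instances, feat_dim) for a single bag
            scores = self.attention(instances)               # (n, 1)
            weights = torch.softmax(scores, dim=0)           # attention over instances
            bag_embedding = (weights * instances).sum(dim=0) # weighted pooling
            return self.classifier(bag_embedding), weights

    def make_virtual_bag(bags, weights_per_bag, label, top_k=8):
        """Assumed form of component 2: build a virtual bag by pooling
        high-attention (high-confidence) instances from same-label bags."""
        picked = []
        for inst, w in zip(bags, weights_per_bag):
            idx = torch.topk(w.squeeze(-1), k=min(top_k, inst.shape[0])).indices
            picked.append(inst[idx])
        virtual = torch.cat(picked, dim=0)
        # Shuffle so the virtual bag is a genuinely new instance combination.
        perm = torch.randperm(virtual.shape[0])
        return virtual[perm], label

    # Usage on random data, standing in for extracted slice features:
    model = AttentionMIL()
    bag = torch.randn(40, 512)          # 40 slice embeddings of dim 512
    logits, attn = model(bag)
    ```

    The attention weights serve double duty here: they pool instances into a bag embedding for classification, and they identify the high-confidence instances that the augmentation step recombines into virtual bags.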
