Author: Zheng, Wenbo; Yan, Lan; Gou, Chao; Zhang, Zhi-Cheng; Zhang, Jun J.; Hu, Ming; Wang, Fei-Yue
Title: Learning to learn by yourself: Unsupervised meta-learning with self-knowledge distillation for COVID-19 diagnosis from pneumonia cases
Cord-id: rgg92m0c
Document date: 2021_5_13
ID: rgg92m0c
Snippet: The goal of diagnosing the coronavirus disease 2019 (COVID-19) from suspected pneumonia cases, that is, recognizing COVID-19 from chest X-ray or computed tomography (CT) images, is to improve diagnostic accuracy, leading to faster intervention. The most important and challenging problem here is to design an effective and robust diagnosis model. To this end, there are three challenges to overcome: (1) The lack of training samples limits the success of existing deep-learning-based methods.
Document: The goal of diagnosing the coronavirus disease 2019 (COVID-19) from suspected pneumonia cases, that is, recognizing COVID-19 from chest X-ray or computed tomography (CT) images, is to improve diagnostic accuracy, leading to faster intervention. The most important and challenging problem here is to design an effective and robust diagnosis model. To this end, there are three challenges to overcome: (1) The lack of training samples limits the success of existing deep-learning-based methods. (2) Many public COVID-19 data sets contain only a few images without fine-grained labels. (3) Due to the explosive growth of suspected cases, it is urgent and important to diagnose not only COVID-19 cases but also cases of other types of pneumonia whose symptoms are similar to those of COVID-19. To address these issues, we propose a novel framework called Unsupervised Meta-Learning with Self-Knowledge Distillation for differentiating COVID-19 from pneumonia cases. During training, our model uses no true labels and aims to acquire the ability to learn to learn by itself. In particular, we first present a deep diagnosis model based on a relation network to capture and memorize the relations among different images. Second, to enhance the performance of our model, we design a self-knowledge distillation mechanism that distills knowledge within the model itself: the network is divided into several parts, and the knowledge in the deeper parts is squeezed into the shallow ones. The final results are derived from our model by learning to compare the features of images. Experimental results demonstrate that our approach achieves significantly higher performance than other state-of-the-art methods. Moreover, we construct a new COVID-19 pneumonia data set based on text mining, consisting of 2696 COVID-19 images (347 X-ray + 2349 CT), 10,155 images (9661 X-ray + 494 CT) of other types of pneumonia, and fine-grained labels for all of them. Our data set distinguishes not only between bacterial and viral causes of pneumonia but also between viral infections caused by influenza viruses and those caused by coronaviruses.
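The abstract above outlines two technical ingredients: a relation module that learns to compare image features, and a self-knowledge distillation step that transfers knowledge from the deeper parts of the network into the shallower ones. The PyTorch sketch below is only a minimal illustration of those two ideas; the class names (SplitEncoder, RelationHead), layer sizes, and the distillation temperature are assumptions for demonstration and are not taken from the paper.

```python
# Minimal illustrative sketch; module names, sizes, and the temperature are
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitEncoder(nn.Module):
    """Backbone split into a shallow and a deep stage so the deeper stage can
    act as a teacher for the shallower one (self-knowledge distillation)."""
    def __init__(self, num_classes=3, feat_dim=64):
        super().__init__()
        self.shallow = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.deep = nn.Sequential(
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Auxiliary head on the shallow stage, main head on the deep stage.
        self.shallow_head = nn.Linear(feat_dim, num_classes)
        self.deep_head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        f_shallow = self.shallow(x)
        f_deep = self.deep(f_shallow)
        z_shallow = self.pool(f_shallow).flatten(1)
        z_deep = self.pool(f_deep).flatten(1)
        return self.shallow_head(z_shallow), self.deep_head(z_deep), z_deep

class RelationHead(nn.Module):
    """Scores how related two image embeddings are ('learning to compare')."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, z_query, z_support):
        return self.net(torch.cat([z_query, z_support], dim=1))

def self_distillation_loss(shallow_logits, deep_logits, temperature=4.0):
    """KL divergence pulling the shallow head toward the (detached) deep head,
    i.e. 'squeezing' knowledge from the deeper part into the shallow one."""
    student = F.log_softmax(shallow_logits / temperature, dim=1)
    teacher = F.softmax(deep_logits.detach() / temperature, dim=1)
    return F.kl_div(student, teacher, reduction="batchmean") * temperature ** 2

# Toy forward pass on random tensors standing in for chest images,
# just to show the shapes and the two loss/score components involved.
model, relation = SplitEncoder(), RelationHead()
query, support = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
s_logits, d_logits, z_q = model(query)
_, _, z_s = model(support)
scores = relation(z_q, z_s)                      # relation scores in [0, 1]
loss = self_distillation_loss(s_logits, d_logits)
print(scores.shape, loss.item())
```

In a full pipeline, the relation scores would drive the episodic "learning to compare" objective, while the distillation term is added to the training loss so the shallow stage inherits the deeper stage's behavior without any external teacher network.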