Selected article for: "loss function and model train"

Author: Guillaume Chassagnon; Maria Vakalopoulou; Enzo Battistella; Stergios Christodoulidis; Trieu-Nghi Hoang-Thi; Severine Dangeard; Eric Deutsch; Fabrice Andre; Enora Guillo; Nara Halm; Stefany El Hajj; Florian Bompard; Sophie Neveu; Chahinez Hani; Ines Saab; Alienor Campredon; Hasmik Koulakian; Souhail Bennani; Gael Freche; Aurelien Lombard; Laure Fournier; Hippolyte Monnier; Teodor Grand; Jules Gregory; Antoine Khalil; Elyas Mahdjoub; Pierre-Yves Brillet; Stephane Tran Ba; Valerie Bousson; Marie-Pierre Revel; Nikos Paragios
Title: AI-Driven CT-based quantification, staging and short-term outcome prediction of COVID-19 pneumonia
  • Document date: 2020_4_22
  • ID: nxm1jr0x_25
    Snippet: Moreover, segmentation masks of the lung and heart of all patients were extracted by using ART-Plan software (Thera-Panacea, Paris, France). ART-Plan is a CE-marked solution for automatic annotation of organs, harnessing a combination of anatomically preserving and deep learning concepts. This software has been trained using a combination of a transformation and an image loss. The transformation loss penalizes the normalized error between the pre.....
    Document: Moreover, segmentation masks of the lung and heart of all patients were extracted using ART-Plan software (Thera-Panacea, Paris, France). ART-Plan is a CE-marked solution for automatic annotation of organs, harnessing a combination of anatomy-preserving and deep learning concepts. The software was trained using a combination of a transformation loss and an image loss. The transformation loss penalizes the normalized error between the network's prediction and the affine registration parameters describing the registration between the source volume and the whole-body scan; these parameters are determined automatically using a downhill simplex optimization approach. The second loss function is an image similarity function, the zero-normalized cross-correlation loss, which seeks an optimal visual correspondence between the observed CT values of the source volume and the corresponding ones in the full-body reference CT volume. This network was trained on 360,000 pairs of CT scans of all anatomies and full-body CT scans. These projections were used to determine the organs present in the test volume. Using the transformation between the test volume and the full-body CT, a surrounding patch was determined for each organ present in the volume. These patches were used to train the deep learning model for each full-body CT. The next step consisted of creating multiple annotations in the different reference spaces; for this, a 3D fully convolutional architecture was trained for every reference anatomy. This architecture takes as input the annotations for each organ once mapped to the reference anatomy and seeks to determine, for each anatomy, a network that can optimally segment the organ of interest, similar to the AtlasNet framework used for the disease segmentation. This procedure was applied to every organ of interest present in the input CT scan. 
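The two training objectives described above can be sketched as follows. This is a minimal illustration, not the ART-Plan implementation: the zero-normalized cross-correlation loss is standard, but expressing the transformation loss as a relative L2 distance between predicted and reference affine parameters is an assumption.

```python
import numpy as np

def zncc_loss(source, reference, eps=1e-8):
    """Zero-normalized cross-correlation (ZNCC) expressed as a loss.

    Both volumes are flattened and mean-centred; ZNCC is their
    normalized inner product, in [-1, 1]. Returning 1 - ZNCC gives a
    loss that is ~0 for perfectly correlated volumes and ~2 for
    anti-correlated ones.
    """
    s = source.astype(np.float64).ravel()
    r = reference.astype(np.float64).ravel()
    s = s - s.mean()
    r = r - r.mean()
    zncc = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r) + eps)
    return 1.0 - zncc

def transformation_loss(pred_params, target_params, eps=1e-8):
    """Normalized error between predicted affine registration parameters
    and the reference parameters (obtained by downhill simplex in the
    paper). The relative-L2 form here is an illustrative assumption.
    """
    diff = pred_params - target_params
    return np.linalg.norm(diff) / (np.linalg.norm(target_params) + eps)
```

In practice the two terms would be weighted and summed into a single training objective; the paper does not state the weighting.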
On average, 6,600 samples per organ were used for training after data augmentation. These networks were trained using a conventional Dice loss. The final organ segmentation was achieved through a winner-takes-all approach over an ensemble of networks. For each organ and each full-body reference CT, a specific network was built, and the segmentation masks generated by each network were mapped back to the original space. The consensus of the recommendations of the different subnetworks determined the optimal label at the voxel level.
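The Dice loss and the voxel-level consensus over the ensemble can be sketched as follows. The Dice formulation is standard; resolving the "winner takes all" consensus as a per-voxel majority vote over binary masks is an assumption about how the subnetworks' recommendations are combined.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-8):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|).

    `pred` holds per-voxel probabilities (or a binary mask) and
    `target` the ground-truth mask; 0 means perfect overlap.
    """
    p = pred.ravel().astype(np.float64)
    t = target.ravel().astype(np.float64)
    intersection = (p * t).sum()
    return 1.0 - (2.0 * intersection + eps) / (p.sum() + t.sum() + eps)

def majority_vote(masks):
    """Per-voxel consensus over an ensemble of binary segmentation
    masks (one per subnetwork), after each mask has been mapped back
    to the original space: a voxel is labelled foreground when more
    than half the subnetworks agree.
    """
    stacked = np.stack([np.asarray(m) for m in masks])
    votes = stacked.sum(axis=0)
    return (2 * votes > len(masks)).astype(np.uint8)
```

With three subnetworks, a voxel segmented by two of them is kept; a voxel segmented by only one is discarded.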
