Selected article for: "activation function and feature vector"

Author: Xueyan Mei; Hao-Chih Lee; Kaiyue Diao; Mingqian Huang; Bin Lin; Chenyu Liu; Zongyu Xie; Yixuan Ma; Philip M. Robson; Michael Chung; Adam Bernheim; Venkatesh Mani; Claudia Calcagno; Kunwei Li; Shaolin Li; Hong Shan; Jian Lv; Tongtong Zhao; Junli Xia; Qihua Long; Sharon Steinberger; Adam Jacobi; Timothy Deyer; Marta Luksza; Fang Liu; Brent P. Little; Zahi A. Fayad; Yang Yang
Title: Artificial intelligence for rapid identification of the coronavirus disease 2019 (COVID-19)
  • Document date: 2020-04-17
  • ID: 79tozwzq_72
    Snippet: We trained a model to integrate CT imaging data and clinical information. We applied a global average pooling layer to the last layer of the convolutional model described previously to derive a 512-dimensional feature vector representing a CT image. A total of 12 clinical features (Table 1) from the same patient were concatenated with this feature vector. An MLP takes this combined feature vector as input to predict SARS-CoV-2 status. We use.....
    Document: We trained a model to integrate CT imaging data and clinical information. We applied a global average pooling layer to the last layer of the convolutional model described previously to derive a 512-dimensional feature vector representing a CT image. A total of 12 clinical features (Table 1) from the same patient were concatenated with this feature vector. An MLP takes this combined feature vector as input to predict SARS-CoV-2 status. We used a 3-layer MLP, with each layer having 64 nodes and composed of a batch normalization layer, a fully connected layer, and a ReLU activation function. Normalized Gaussian noise was added at the input layer for data augmentation. The MLP was trained jointly with the CNN. Binary cross-entropy was applied to the predictions from both the MLP and the CNN during training, and the sum of these two losses was used as the overall objective function for the joint model. We used the same optimization strategy as for Model 1, except that the learning rate was increased to 0.002.
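
    The paragraph above specifies the fusion architecture closely enough to sketch. The following PyTorch code is a minimal sketch, assuming PyTorch as the framework (the excerpt does not name one): global average pooling to a 512-dimensional image feature, concatenation with 12 clinical features, a 3-layer MLP of 64 nodes per layer (batch normalization, fully connected layer, ReLU), Gaussian-noise augmentation at the MLP input, and a joint objective summing the binary cross-entropy losses of the CNN and MLP predictions. The backbone, the head names, and the noise scale noise_std are hypothetical placeholders, not details from the paper.

        import torch
        import torch.nn as nn

        class JointModel(nn.Module):
            """Sketch of the joint CNN + clinical-feature MLP described in the excerpt."""
            def __init__(self, cnn_backbone: nn.Module, n_clinical: int = 12,
                         noise_std: float = 0.1):
                super().__init__()
                self.cnn = cnn_backbone                # convolutional model ("Model 1"); assumed output (B, 512, H, W)
                self.pool = nn.AdaptiveAvgPool2d(1)    # global average pooling -> 512-d image feature
                self.cnn_head = nn.Linear(512, 1)      # CNN-only prediction, used for its own BCE term
                self.noise_std = noise_std             # noise scale is assumed; the paper says only "normalized Gaussian noise"
                mlp_layers, in_dim = [], 512 + n_clinical
                for _ in range(3):                     # 3 layers, each: batch norm -> fully connected -> ReLU
                    mlp_layers += [nn.BatchNorm1d(in_dim), nn.Linear(in_dim, 64), nn.ReLU()]
                    in_dim = 64
                self.mlp = nn.Sequential(*mlp_layers)
                self.mlp_head = nn.Linear(64, 1)       # joint (image + clinical) prediction

            def forward(self, ct_image, clinical):
                feat = self.pool(self.cnn(ct_image)).flatten(1)    # (B, 512)
                cnn_logit = self.cnn_head(feat)
                combined = torch.cat([feat, clinical], dim=1)      # (B, 512 + 12)
                if self.training:                                  # Gaussian noise augmentation at the MLP input
                    combined = combined + self.noise_std * torch.randn_like(combined)
                mlp_logit = self.mlp_head(self.mlp(combined))
                return cnn_logit, mlp_logit

        def joint_loss(cnn_logit, mlp_logit, target):
            # Overall objective: sum of the two binary cross-entropy terms.
            bce = nn.functional.binary_cross_entropy_with_logits
            return bce(cnn_logit, target) + bce(mlp_logit, target)

    Training would then step an optimizer over joint_loss at the stated learning rate of 0.002; the optimizer itself ("the same optimization strategy as for Model 1") is not specified in this excerpt.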

    Search related documents:
    Co phrase search for related documents
    • activation function and batch normalization layer: 1
    • activation function and binary cross: 1, 2
    • activation function and convolutional model: 1, 2, 3
    • activation function and convolutional model layer: 1
    • activation function and entropy binary cross: 1, 2
    • activation function and feature vector: 1
    • activation function and fully connected layer: 1
    • activation function and input layer: 1
    • activation function and objective function: 1
    • activation function and ReLU activation function: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
    • activation function and training process: 1
    • averaging layer and batch normalization: 1
    • averaging layer and batch normalization layer: 1
    • averaging layer and global averaging layer: 1, 2
    • averaging layer and training process: 1
    • batch normalization and binary cross: 1
    • batch normalization and convolutional model: 1, 2, 3, 4
    • batch normalization and convolutional model layer: 1
    • batch normalization and CT image: 1