Selected article for: "batch size and cross entropy loss function"

Author: Liu, Yuliang; Zhang, Quan; Zhao, Geng; Liu, Guohua; Liu, Zhiang
Title: Deep Learning-Based Method of Diagnosing Hyperlipidemia and Providing Diagnostic Markers Automatically
  • Document date: 2020-03-11
  • ID: 1r4gm2d4_31
    Snippet: During the training process, the size of mini-batch was 20, the loss function was the cross-entropy cost function, and the Adam algorithm was used to optimize the global parameters (ε=0.001, p1=0.9, p2=0.999, δ=10^-8). At the same time, one-hot technology was also applied to the representation of the data labels. Each dimension of the output vector represents a different health condition; only the corresponding element is 1 and the rest are 0. Becau.....
    Document: During the training process, the mini-batch size was 20, the loss function was the cross-entropy cost function, and the Adam algorithm was used to optimize the global parameters (ε=0.001, p1=0.9, p2=0.999, δ=10^-8). At the same time, one-hot technology was also applied to the representation of the data labels: each dimension of the output vector represents a different health condition, and only the corresponding element is 1 while the rest are 0. Because this paper distinguished two health conditions, a two-dimensional vector was used to code the data label: the normal diagnosis result was coded as [1, 0] and the hyperlipidemia diagnosis result as [0, 1]. One-hot technology helps to improve the robustness of the model. Because the task is binary classification, the sigmoid function was used as the classification function. As mentioned above, cross-entropy was used as the loss function; the principle of cross-entropy is shown in Equation 8.
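    The label coding and loss described above can be sketched in a few lines. This is not the authors' code: it is a minimal illustration of two-class one-hot encoding, the sigmoid classification function, and a cross-entropy cost (the paper's Equation 8 is not reproduced in the excerpt, so the standard form is assumed). The paper's ε, p1, p2, δ are assumed to correspond, under the usual Adam naming, to the learning rate, β1, β2, and the numerical-stability ε, respectively.

    ```python
    import math

    def one_hot(label, num_classes=2):
        """Encode a class index as a one-hot vector: only the matching element is 1."""
        vec = [0.0] * num_classes
        vec[label] = 1.0
        return vec

    def sigmoid(z):
        """Classification function used in the excerpt for the binary task."""
        return 1.0 / (1.0 + math.exp(-z))

    def cross_entropy(targets, probs, eps=1e-12):
        """Cross-entropy between a one-hot target and predicted probabilities."""
        return -sum(t * math.log(p + eps) for t, p in zip(targets, probs))

    # "Normal" -> index 0 -> [1, 0]; "hyperlipidemia" -> index 1 -> [0, 1],
    # matching the coding described in the document passage.
    normal = one_hot(0)  # [1.0, 0.0]
    hyper = one_hot(1)   # [0.0, 1.0]

    # Raw network outputs (logits) passed element-wise through the sigmoid.
    logits = [2.0, -1.0]
    probs = [sigmoid(z) for z in logits]
    loss = cross_entropy(normal, probs)
    ```

    A correct prediction drives the corresponding probability toward 1, and the loss toward 0; a wrong prediction makes the loss grow without bound, which is why cross-entropy pairs well with sigmoid outputs in binary classification.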

    Search related documents:
    Co-phrase search for related documents
    • dimensional vector and output vector: 1
    • dimensional vector and training process: 1
    • global parameter and model robustness: 1
    • global parameter and sigmoid function: 1
    • global parameter and sigmoid function classification function: 1
    • loss function and model robustness: 1, 2
    • loss function and sigmoid function: 1, 2
    • loss function and training process: 1, 2, 3, 4, 5
    • mini batch size and training process: 1
    • sigmoid function and training process: 1