Selected article for: "classification performance and low number"

Author: Saban Ozturk; Umut Ozkaya; Mucahid Barstugan
Title: Classification of Coronavirus Images using Shrunken Features
  • Document date: 2020_4_6
  • ID: 2l1zw19o_58
    Snippet: For this reason, classifier performance can be misleading. However, when all comparisons are examined, the common contribution cannot be ignored. The results of the sAE method, which produces favorable results in many studies in the literature, are surprising. The main reasons for this are the low number of samples, the imbalance between classes, and the closeness of the synthetic data. When this situation causes high in-class affinity, it creates a code.....
    Document: For this reason, classifier performance can be misleading. However, when all comparisons are examined, the common contribution cannot be ignored. The results of the sAE method, which produces favorable results in many studies in the literature, are surprising. The main reasons for this are the low number of samples, the imbalance between classes, and the closeness of the synthetic data. When this situation causes high in-class affinity, it creates a code-generation problem for the sAE. It appears inappropriate to train a CNN architecture on such a dataset, which is insufficient even for a shallow sAE architecture. Similar to the hand-crafted feature extraction approach, a PCA architecture was therefore considered instead of the sAE architecture. Since PCA can operate independently of the number of samples, it provided very successful classification performance.
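
    A minimal sketch of the PCA-based feature shrinking described above, assuming hand-crafted features have already been extracted into a matrix X with class labels y. The placeholder data, the component count, and the SVM classifier are illustrative assumptions, not the authors' exact pipeline:

        # Sketch: shrink a hand-crafted feature matrix with PCA, then classify.
        # X (n_samples x n_features) and y stand in for the extracted features
        # and labels; here they are random placeholders for a small dataset.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 78))    # placeholder feature matrix (few samples)
        y = rng.integers(0, 2, size=120)  # placeholder binary labels (illustrative)

        # PCA works on the feature covariance, so it remains usable even when the
        # sample count is too small for a CNN or a shallow stacked autoencoder.
        model = make_pipeline(
            StandardScaler(),
            PCA(n_components=20),  # shrink features to a compact representation
            SVC(kernel="rbf"),     # illustrative classifier choice
        )
        scores = cross_val_score(model, X, y, cv=5)
        print(f"cross-validated accuracy: {scores.mean():.3f}")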

    Search related documents:
    Co-phrase search for related documents
    • classification performance and CNN architecture: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
    • classification performance and dataset training: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
    • classification performance and feature extraction: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35
    • classification performance and feature extraction technique: 1, 2
    • classification performance and literature study: 1
    • classification performance and low number: 1, 2
    • classification performance and sae architecture: 1
    • classification performance and sae method: 1, 2
    • classification performance and sample number: 1
    • classifier performance and CNN architecture: 1, 2
    • classifier performance and dataset training: 1, 2, 3, 4
    • classifier performance and feature extraction: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
    • classifier performance and feature extraction technique: 1
    • classifier performance and sample number: 1, 2
    • CNN architecture and dataset training: 1, 2, 3, 4, 5, 6, 7
    • CNN architecture and feature extraction: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
    • CNN architecture and literature study: 1