Selected article for: "human speech and network model"

Author: Lella, Kranthi Kumar; Pja, Alphonse
Title: Automatic COVID-19 disease diagnosis using 1D convolutional neural network and augmentation with human respiratory sound based on parameters: cough, breath, and voice
  • Cord-id: jl5wiydl
  • Document date: 2021_3_10
  • ID: jl5wiydl
    Snippet: Respiratory sound classification for diagnosing COVID-19 disease has attracted considerable attention from clinical scientists and medical researchers over the past year. To date, various Artificial Intelligence (AI) models have been deployed in the real world to detect COVID-19 disease from human-generated sounds such as voice/speech, cough, and breath. The Convolutional Neural Network (CNN) is widely used to solve real-world problems on machines based on Artificial
    Document: Respiratory sound classification for diagnosing COVID-19 disease has attracted considerable attention from clinical scientists and medical researchers over the past year. To date, various Artificial Intelligence (AI) models have been deployed in the real world to detect COVID-19 disease from human-generated sounds such as voice/speech, cough, and breath. The Convolutional Neural Network (CNN) is widely used to solve real-world problems with AI. In this context, a one-dimensional (1D) CNN is proposed and implemented to diagnose COVID-19 respiratory disease from human respiratory sounds such as voice, cough, and breath. An augmentation-based mechanism is applied to improve the preprocessing of the COVID-19 sounds dataset and to automate COVID-19 disease diagnosis with the 1D convolutional network. Furthermore, a Data De-noising Auto Encoder (DDAE) technique is used to generate deep sound features as input to the 1D CNN instead of the standard Mel-frequency cepstral coefficient (MFCC) input, and it achieves better accuracy and performance than previous models. RESULTS: The DDAE features yield around 4% higher accuracy than traditional MFCC. COVID-19 sounds, asthma sounds, and regular healthy sounds are classified with the 1D CNN classifier, which detects COVID-19 disease from respiratory sounds with around 90% accuracy. CONCLUSION: A Data De-noising Auto Encoder (DDAE) was adopted to extract in-depth features of the acoustic sound signals instead of traditional MFCC. The proposed model efficiently classifies COVID-19 sounds to detect COVID-19 positive symptoms.
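
    Note: the abstract describes the pipeline (waveform augmentation, DDAE deep features in place of MFCC, 1D CNN classifier over three classes) but gives no implementation details. The sketch below is only an illustration of that pipeline in TensorFlow/Keras under assumptions: the frame length, latent size, layer shapes, augmentation parameters, and all hyperparameters are invented for the example and are not taken from the paper.

        import numpy as np
        import tensorflow as tf
        from tensorflow.keras import layers, models

        FRAME_LEN = 16000    # hypothetical: 1 s of audio at 16 kHz (not given in the abstract)
        LATENT_DIM = 128     # hypothetical size of the DDAE deep-feature vector
        NUM_CLASSES = 3      # COVID-19 / asthma / healthy, as described in the abstract

        def augment_waveform(wave, noise_std=0.005, max_shift=1600):
            # Illustrative augmentation only: additive Gaussian noise plus a random time shift.
            shifted = np.roll(wave, np.random.randint(-max_shift, max_shift))
            return shifted + np.random.normal(0.0, noise_std, size=wave.shape)

        def build_denoising_autoencoder():
            # Encoder compresses a noisy waveform frame into a latent "deep feature" vector;
            # the decoder reconstructs the clean frame under a mean-squared-error objective.
            noisy_in = layers.Input(shape=(FRAME_LEN, 1))
            x = layers.Conv1D(16, 9, strides=4, padding="same", activation="relu")(noisy_in)
            x = layers.Conv1D(32, 9, strides=4, padding="same", activation="relu")(x)
            x = layers.Flatten()(x)
            latent = layers.Dense(LATENT_DIM, activation="relu", name="deep_features")(x)
            x = layers.Dense((FRAME_LEN // 16) * 32, activation="relu")(latent)
            x = layers.Reshape((FRAME_LEN // 16, 32))(x)
            x = layers.Conv1DTranspose(16, 9, strides=4, padding="same", activation="relu")(x)
            recon = layers.Conv1DTranspose(1, 9, strides=4, padding="same")(x)
            autoencoder = models.Model(noisy_in, recon)
            encoder = models.Model(noisy_in, latent)
            autoencoder.compile(optimizer="adam", loss="mse")
            return autoencoder, encoder

        def build_1d_cnn_classifier():
            # Small 1D CNN that takes the DDAE deep-feature vector (instead of MFCCs) as input.
            feats_in = layers.Input(shape=(LATENT_DIM, 1))
            x = layers.Conv1D(32, 3, padding="same", activation="relu")(feats_in)
            x = layers.MaxPooling1D(2)(x)
            x = layers.Conv1D(64, 3, padding="same", activation="relu")(x)
            x = layers.GlobalAveragePooling1D()(x)
            x = layers.Dense(64, activation="relu")(x)
            out = layers.Dense(NUM_CLASSES, activation="softmax")(x)
            clf = models.Model(feats_in, out)
            clf.compile(optimizer="adam",
                        loss="sparse_categorical_crossentropy",
                        metrics=["accuracy"])
            return clf

        # Usage sketch (x_clean: (N, FRAME_LEN, 1) waveform frames, y: (N,) integer labels):
        #   x_noisy = np.stack([augment_waveform(w) for w in x_clean])
        #   autoencoder, encoder = build_denoising_autoencoder()
        #   autoencoder.fit(x_noisy, x_clean, epochs=30, batch_size=32)
        #   deep_feats = encoder.predict(x_clean)[..., np.newaxis]   # (N, LATENT_DIM, 1)
        #   clf = build_1d_cnn_classifier()
        #   clf.fit(deep_feats, y, validation_split=0.2, epochs=30, batch_size=32)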

    Search related documents: