Author: Dai, Wei; Berleant, Daniel
Title: Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation
Cord-id: 1az2ega6
Document date: 2021-03-02
ID: 1az2ega6
Document: The accuracy of deep learning (DL) classifiers is unstable in that it often changes significantly when retested on adversarial, imperfect, or otherwise perturbed images. This paper adds to the small but fundamental body of work on benchmarking the robustness of DL classifiers on defective images. Unlike existing single-factor digital perturbation work, we provide state-of-the-art two-factor perturbation, which applies two natural perturbations to images in different sequences. The two-factor perturbations include (1) two digital perturbations (salt & pepper noise and Gaussian noise) applied in both sequences, and (2) one digital perturbation (salt & pepper noise) and one geometric perturbation (rotation) applied in different sequences. To measure robust DL classifiers, previous researchers provided 15 types of single-factor corruption. We created 69 benchmarking image sets, including a clean set, sets with single-factor perturbations, and sets with two-factor perturbation conditions. To the best of our knowledge, this is the first report that two-factor perturbed images improve both the robustness and the accuracy of DL classifiers. Previous research evaluating DL classifiers has often used top-1/top-5 accuracy, so researchers have usually offered tables, line diagrams, and bar charts to display the accuracy of DL classifiers. But these existing approaches cannot quantitatively evaluate the robustness of DL classifiers. We introduce a new two-dimensional statistical visualization tool, based on mean accuracy and coefficient of variation (CV), to benchmark the robustness of DL classifiers. All source code and related image sets are shared at http://cslinux.semo.edu/david/data.html and https://github.com/daiweiworking/RobustDeepLearningUsingPerturbations to support future academic research and industry projects.
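The abstract's two ideas — applying two perturbations in both orders, and summarizing robustness with mean accuracy plus coefficient of variation — can be illustrated with a minimal NumPy sketch. This is not the authors' released code: the function names, noise parameters, and accuracy values below are illustrative assumptions for 8-bit grayscale images.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_salt_pepper(img, amount=0.05):
    """Flip a random fraction of pixels to black (0) or white (255)."""
    noisy = img.copy()
    hit = rng.random(img.shape) < amount       # pixels to corrupt
    salt = rng.random(img.shape) < 0.5         # half salt, half pepper
    noisy[hit & salt] = 255
    noisy[hit & ~salt] = 0
    return noisy

def add_gaussian(img, sigma=10.0):
    """Add zero-mean Gaussian noise, then clip back to the 8-bit range."""
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)

# Two-factor digital perturbation, applied in both sequences,
# since the order of application matters.
sp_then_gauss = add_gaussian(add_salt_pepper(img))
gauss_then_sp = add_salt_pepper(add_gaussian(img))

# Digital + geometric two-factor perturbation (rotation by 90 degrees
# here as a stand-in for the paper's rotation factor).
sp_then_rot = np.rot90(add_salt_pepper(img))

# Robustness summary over accuracies measured across benchmark sets:
# mean accuracy and coefficient of variation (CV = std / mean).
# These accuracy values are hypothetical placeholders.
accuracies = np.array([0.91, 0.84, 0.79, 0.88])
mean_acc = accuracies.mean()
cv = accuracies.std() / mean_acc
```

A classifier plotted with high mean accuracy and low CV would then occupy the "robust and accurate" region of the two-dimensional visualization the paper proposes.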