Author: Lal, Sheeba; Rehman, Saeed Ur; Shah, Jamal Hussain; Meraj, Talha; Rauf, Hafiz Tayyab; Damaševičius, Robertas; Mohammed, Mazin Abed; Abdulkareem, Karrar Hameed
Title: Adversarial Attack and Defence through Adversarial Training and Feature Fusion for Diabetic Retinopathy Recognition
Cord-id: 7786pvln
Document date: 2021-06-07
                    
Snippet: Due to the rapid growth of artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of deployed algorithms must be guaranteed. The susceptibility of DL algorithms to adversarial examples is widely acknowledged: artificially crafted examples cause DL models to misclassify inputs that a human observer would consider benign. Adversarial threats also arise in practical, real-world deployments.

Document: Due to the rapid growth of artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of deployed algorithms must be guaranteed. The susceptibility of DL algorithms to adversarial examples is widely acknowledged: artificially crafted examples cause DL models to misclassify inputs that a human observer would consider benign. Adversarial threats also arise in practical, real-world deployments. Consequently, adversarial attacks and defenses, and the reliability of machine learning more broadly, have drawn growing interest and have become a hot topic of research in recent years. We introduce a framework that defends against an adversarial speckle-noise attack through adversarial training and a feature-fusion strategy, preserving correct classification labels. We evaluate and analyze adversarial attacks and defenses on retinal fundus images for the Diabetic Retinopathy recognition problem, a state-of-the-art endeavor. On retinal fundus images subjected to adversarial attacks, the defended model attains 99% accuracy, demonstrating the robustness of the proposed defensive model.
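The abstract names three ingredients (a speckle-noise attack, adversarial training, and feature fusion) without giving their formulations. Below is a minimal, hypothetical PyTorch sketch of what such a pipeline could look like: an FGSM-style multiplicative (speckle-like) perturbation, a mixed clean/adversarial training step, and fusion by feature concatenation. The epsilon value, the sign-of-gradient update, the 0.5/0.5 loss weighting, and the concatenation operator are all assumptions, not the paper's actual method.

    # Hypothetical sketch of a speckle-style adversarial attack,
    # an adversarial-training step, and feature fusion in PyTorch.
    # None of the specifics below are taken from the paper itself.
    import torch
    import torch.nn.functional as F

    def speckle_attack(model, images, labels, eps=0.05):
        """Craft x_adv = x * (1 + eps * sign(grad_x loss)): a multiplicative,
        speckle-like perturbation (assumed FGSM-style instantiation)."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        grad = torch.autograd.grad(loss, images)[0]
        adv = images * (1.0 + eps * grad.sign())
        return adv.clamp(0.0, 1.0).detach()

    def adversarial_training_step(model, optimizer, images, labels, eps=0.05):
        """One training update on an even mix of clean and adversarial
        examples, a common adversarial-training recipe (assumed weighting)."""
        model.train()
        adv = speckle_attack(model, images, labels, eps)
        optimizer.zero_grad()
        loss = 0.5 * (F.cross_entropy(model(images), labels)
                      + F.cross_entropy(model(adv), labels))
        loss.backward()
        optimizer.step()
        return loss.item()

    def fuse_features(feat_a, feat_b):
        """Feature fusion by channel-wise concatenation; the abstract does
        not specify the fusion operator, so this is one common choice."""
        return torch.cat([feat_a, feat_b], dim=1)

A classifier hardened this way would be trained by calling adversarial_training_step once per minibatch of fundus images, so that the model sees both clean and speckle-perturbed inputs at every update.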
 