Author: Tong, Liang; Chen, Zhengzhang; Ni, Jingchao; Cheng, Wei; Song, Dongjin; Chen, Haifeng; Vorobeychik, Yevgeniy
Title: FACESEC: A Fine-grained Robustness Evaluation Framework for Face Recognition Systems
Cord-id: 76e8xh95
Document date: 2021-04-08
ID: 76e8xh95
                    
Snippet: We present FACESEC, a framework for fine-grained robustness evaluation of face recognition systems. FACESEC evaluation is performed along four dimensions of adversarial modeling: the nature of perturbation (e.g., pixel-level or face accessories), the attacker's system knowledge (about training data and learning architecture), goals (dodging or impersonation), and capability (tailored to individual inputs or across sets of these). We use FACESEC to study five face recognition systems in both closed-set and open-set settings ...
Document: We present FACESEC, a framework for fine-grained robustness evaluation of face recognition systems. FACESEC evaluation is performed along four dimensions of adversarial modeling: the nature of perturbation (e.g., pixel-level or face accessories), the attacker's system knowledge (about training data and learning architecture), goals (dodging or impersonation), and capability (tailored to individual inputs or across sets of these). We use FACESEC to study five face recognition systems in both closed-set and open-set settings, and to evaluate the state-of-the-art approach for defending against physically realizable attacks on these systems. We find that accurate knowledge of neural architecture is significantly more important than knowledge of the training data in black-box attacks. Moreover, we observe that open-set face recognition systems are more vulnerable than closed-set systems under different types of attacks. The efficacy of attacks for other threat model variations, however, appears highly dependent on both the nature of perturbation and the neural network architecture. For example, attacks that involve adversarial face masks are usually more potent, even against adversarially trained models, and the ArcFace architecture tends to be more robust than the others.
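
Note: The abstract enumerates four threat-model dimensions (perturbation type, attacker knowledge, goal, capability). The following is a minimal, illustrative Python sketch of how that attack space can be enumerated as a configuration object; all class, member, and field names here are assumptions made for illustration and are not FACESEC's actual API.

from dataclasses import dataclass
from enum import Enum

# Illustrative enumeration of the four threat-model dimensions from the abstract.
# Names are hypothetical, not taken from the FACESEC codebase.

class Perturbation(Enum):
    PIXEL = "pixel-level"              # e.g., digital-domain perturbations
    EYEGLASS_FRAME = "eyeglass frame"  # physically realizable accessory
    STICKER = "face sticker"
    FACE_MASK = "adversarial face mask"

class Knowledge(Enum):
    FULL = "training data + architecture"   # white-box
    ARCHITECTURE_ONLY = "architecture only"
    DATA_ONLY = "training data only"
    NONE = "black-box"

class Goal(Enum):
    DODGING = "evade recognition"
    IMPERSONATION = "be recognized as a target identity"

class Capability(Enum):
    INDIVIDUAL = "perturbation tailored to a single input"
    UNIVERSAL = "one perturbation shared across a set of inputs"

@dataclass
class ThreatModel:
    perturbation: Perturbation
    knowledge: Knowledge
    goal: Goal
    capability: Capability

# Example point in the grid: a black-box, universal dodging attack with an
# adversarial face mask (the perturbation type the paper reports as most potent).
tm = ThreatModel(Perturbation.FACE_MASK, Knowledge.NONE,
                 Goal.DODGING, Capability.UNIVERSAL)
print(tm)

Each reported finding (e.g., that architecture knowledge matters more than training-data knowledge, or that face-mask attacks remain potent against adversarially trained models) corresponds to comparing attack efficacy across particular points in this grid.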
 