
Adversarial Attack Challenge for Secure Face Recognition 2025

Lorenzo Pellegrini;Nicolò Di Domenico;Guido Borghi;
2025

Abstract

Adversarial attacks pose a significant threat to the reliability of biometric systems, particularly in security-critical applications such as identity verification and access control. Ensuring robustness against such attacks is essential for the safe deployment of face recognition technologies in real-world scenarios. To advance this goal, the 2025 Adversarial Attack Challenge for Secure Face Recognition was organized as part of the International Joint Conference on Biometrics (IJCB) 2025. The competition focused on two main tracks: Detection, where the objective was to determine whether a given face image is clean or adversarial, and Resilience, which aimed to evaluate recognition systems under adversarial perturbations. Participants were provided with a standardized dataset derived from CelebA and LFW, encompassing both clean samples and adversarial images crafted using ten diverse attack methods targeting evasion and impersonation scenarios. To ensure fairness and reproducibility, all models were trained solely on the data provided, with support from a custom open-source adversarial attack package tailored for face recognition. In addition to benchmarking adversarial robustness, the challenge contributes to the research community by releasing the dataset and the extensible attack package, enabling further investigation of secure and reliable face recognition systems.
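To illustrate the kind of perturbation the Resilience track is concerned with: a gradient-sign (FGSM-style) evasion step can be sketched against a toy linear matcher. This is a minimal, hypothetical example for intuition only; the challenge's actual attack package and models are not shown here, and the "matcher" below is a stand-in logistic model, not a face recognition network.

```python
import numpy as np

# Hedged sketch: an FGSM-style evasion attack against a toy linear "matcher".
# The real challenge uses face recognition models and ten attack methods; here a
# logistic model over random features stands in, purely to show the mechanics.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # toy matcher weights (assumption, not the challenge's model)
x = rng.normal(size=16)   # clean input features (stand-in for a face image)
y = 1.0                   # "genuine match" label

def loss(v):
    # Logistic loss of the linear score w @ v against label y.
    z = y * (w @ v)
    return np.log1p(np.exp(-z))

# Gradient of the logistic loss with respect to the input x.
grad = -y * w / (1.0 + np.exp(y * (w @ x)))

eps = 0.1
# FGSM step: move each input coordinate by eps in the sign of the gradient,
# i.e. the direction that most increases the loss under an L-infinity budget.
x_adv = x + eps * np.sign(grad)

assert loss(x_adv) > loss(x)  # the bounded perturbation degrades the matcher
```

For this linear model the loss increase is guaranteed, since the sign step moves the score against the label by eps times the L1 norm of the weights; for deep face recognition models the same step is only a first-order heuristic, which is why stronger iterative attacks exist.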
2025 IEEE International Joint Conference on Biometrics (IJCB), pp. 1-10
Tremoço, J., Medvedev, I., Freitas, N., Costa, A., Nunes, D., Bunzel, N., et al. (2025). Adversarial Attack Challenge for Secure Face Recognition 2025. New York: IEEE. doi: 10.1109/IJCB65343.2025.11411446
Tremoço, João; Medvedev, Iurii; Freitas, Nuno; Costa, Andreia; Nunes, Diogo; Bunzel, Niklas; Graner, Lukas; Göller, Nicolas; Pellegrini, Lorenzo; Di Domenico, Nicolò; et al.
Files in this record:
File  Size  Format
Adversarial Attack Challenge for Secure Face Recognition 2025 - preprint.pdf

Under embargo until 02/03/2028

Type: Postprint / Author's Accepted Manuscript (AAM) - version accepted for publication after peer review
License: Free open-access license
Size: 1.6 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1046070
Citations
  • Scopus 0