Citation: | Zhang C C, Wang S, Wang W Y, et al. Adversarial background attacks in a limited area for CNN based face recognition[J]. Opto-Electron Eng, 2023, 50(1): 220266. doi: 10.12086/oee.2023.220266 |
[1] | Lee S, Woo T, Lee S H. SBNet: segmentation-based network for natural language-based vehicle search[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021: 4049−4055. https://doi.org/10.1109/CVPRW53098.2021.00457. |
[2] | Sun R, Shan X Q, Sun Q J, et al. NIR-VIS face image translation method with dual contrastive learning framework[J]. Opto-Electron Eng, 2022, 49(4): 210317. doi: 10.12086/oee.2022.210317 |
[3] | Meng Q E, Satoh S. ADINet: attribute driven incremental network for retinal image classification[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 4032–4041. https://doi.org/10.1109/CVPR42600.2020.00409. |
[4] | Singh V, Hari S K S, Tsai T, et al. Simulation driven design and test for safety of AI based autonomous vehicles[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021: 122−128. https://doi.org/10.1109/CVPRW53098.2021.00022. |
[5] | Liao M H, Zheng S S, Pan S X, et al. Deep-learning-based ciphertext-only attack on optical double random phase encryption[J]. Opto-Electron Adv, 2021, 4(5): 200016. doi: 10.29026/oea.2021.200016 |
[6] | Ma T G, Tobah M, Wang H Z, et al. Benchmarking deep learning-based models on nanophotonic inverse design problems[J]. Opto-Electron Sci, 2022, 1(1): 210012. doi: 10.29026/oes.2022.210012 |
[7] | Raji I D, Fried G. About face: a survey of facial recognition evaluation[Z]. arXiv: 2102.00813, 2021. https://arxiv.org/abs/2102.00813. |
[8] | Pesenti J. An update on our use of face recognition[EB/OL]. (2021-11-02). https://about.fb.com/news/2021/11/update-on-use-of-face-recognition/. |
[9] | Sun Q R, Ma L Q, Oh S J, et al. Natural and effective obfuscation by head inpainting[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018: 5050–5059. https://doi.org/10.1109/CVPR.2018.00530. |
[10] | Wright E. The future of facial recognition is not fully known: developing privacy and security regulatory mechanisms for facial recognition in the retail sector[J]. Fordham Intell Prop Media Ent L J, 2019, 29: 611. |
[11] | Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks[C]//2nd International Conference on Learning Representations, 2014. |
[12] | Madry A, Makelov A, Schmidt L, et al. Towards deep learning models resistant to adversarial attacks[C]//6th International Conference on Learning Representations, 2018. |
[13] | Goodfellow I J, Shlens J, Szegedy C. Explaining and harnessing adversarial examples[C]//3rd International Conference on Learning Representations, 2015. |
[14] | Karmon D, Zoran D, Goldberg Y. LaVAN: localized and visible adversarial noise[C]//Proceedings of the 35th International Conference on Machine Learning, 2018: 2512–2520. |
[15] | Wu D X, Wang Y S, Xia S T, et al. Skip connections matter: on the transferability of adversarial examples generated with ResNets[C]//8th International Conference on Learning Representations, 2020. |
[16] | Brown T B, Mané D, Roy A, et al. Adversarial patch[Z]. arXiv: 1712.09665, 2017. https://arxiv.org/abs/1712.09665. |
[17] | Kurakin A, Goodfellow I J, Bengio S. Adversarial examples in the physical world[C]//5th International Conference on Learning Representations, 2017. |
[18] | Athalye A, Engstrom L, Ilyas A, et al. Synthesizing robust adversarial examples[C]//Proceedings of the 35th International Conference on Machine Learning, 2018: 284–293. |
[19] | Pautov M, Melnikov G, Kaziakhmedov E, et al. On adversarial patches: real-world attack on ArcFace-100 face recognition system[C]//2019 International Multi-Conference on Engineering, Computer and Information Sciences, 2019: 391–396. |
[20] | Komkov S, Petiushko A. AdvHat: real-world adversarial attack on ArcFace face ID system[C]//2020 25th International Conference on Pattern Recognition, 2021: 819–826. https://doi.org/10.1109/ICPR48806.2021.9412236. |
[21] | Nguyen D L, Arora S S, Wu Y H, et al. Adversarial light projection attacks on face recognition systems: a feasibility study[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020: 3548−3556. https://doi.org/10.1109/CVPRW50498.2020.00415. |
[22] | Jan S T K, Messou J, Lin Y C, et al. Connecting the digital and physical world: improving the robustness of adversarial attacks[C]//Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, 2019: 119. https://doi.org/10.1609/aaai.v33i01.3301962. |
[23] | Moosavi-Dezfooli S M, Fawzi A, Frossard P. DeepFool: a simple and accurate method to fool deep neural networks[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition, 2016: 2574−2582. https://doi.org/10.1109/CVPR.2016.282. |
[24] | Su J W, Vargas D V, Sakurai K. One pixel attack for fooling deep neural networks[J]. IEEE Trans Evol Comput, 2019, 23(5): 828−841. doi: 10.1109/TEVC.2019.2890858 |
[25] | Sharif M, Bhagavatula S, Bauer L, et al. Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition[C]//Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016: 1528–1540. https://doi.org/10.1145/2976749.2978392. |
[26] | Xu K D, Zhang G Y, Liu S J, et al. Adversarial T-shirt! Evading person detectors in a physical world[C]//Proceedings of the 16th European Conference on Computer Vision, 2020: 665–681. https://doi.org/10.1007/978-3-030-58558-7_39. |
[27] | Rahmati A, Moosavi-Dezfooli S M, Frossard P, et al. GeoDA: a geometric framework for black-box adversarial attacks[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 8443–8452. https://doi.org/10.1109/CVPR42600.2020.00847. |
[28] | Sun Y, Wang X G, Tang X O. Deep convolutional network cascade for facial point detection[C]//2013 IEEE Conference on Computer Vision and Pattern Recognition, 2013: 3476–3483. https://doi.org/10.1109/CVPR.2013.446. |
[29] | Wang J F, Yuan Y, Yu G. Face attention network: an effective face detector for the occluded faces[Z]. arXiv: 1711.07246, 2017. https://arxiv.org/abs/1711.07246. |
[30] | Parkhi O M, Vedaldi A, Zisserman A. Deep face recognition[C]//Proceedings of the British Machine Vision Conference 2015, 2015: 41.1–41.12. |
[31] | Peng H Y, Yu S Q. A systematic IoU-related method: beyond simplified regression for better localization[J]. IEEE Trans Image Process, 2021, 30: 5032−5044. doi: 10.1109/TIP.2021.3077144 |
[32] | Duan R J, Ma X J, Wang Y S, et al. Adversarial camouflage: hiding physical-world attacks with natural styles[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 997−1005. https://doi.org/10.1109/CVPR42600.2020.00108. |
Scheme of facial adversarial attacks. Panel A shows a physical foreground attack with an adversarial patch (patch image from Pautov et al. [19]); Panel B shows a physical adversarial background attack; Panel C shows a digital adversarial background attack. Each attack aims to mislead the face recognizer into an incorrect class using adversarial examples
The scheme of generating an adversarial patch in BALA. The scheme consists of three parts: mask generation, perturbation generation, and perturbation refinement. T(•) denotes a set of transformations
The illustration of mask generation. (a) Cropped face image; (b) The blue points are the mandible landmarks of the face, and the green lines form their maximum circumscribed rectangle; (c) The mask, in which the white patch marks the location of the adversarial patch and the red point marks the selected candidate position
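The mask-generation step in this caption can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name `background_patch_mask`, the landmark array, and the candidate-placement rule (clipping the patch to the left of the face rectangle) are all assumptions made for demonstration.

```python
import numpy as np

def background_patch_mask(image_shape, mandible_pts, patch_size):
    """Build a binary mask placing a patch in the background beside the face.

    mandible_pts: (N, 2) array of (x, y) jaw-line landmark coordinates.
    The maximum circumscribed rectangle of these points approximates
    the face region; candidate patch locations lie outside it.
    NOTE: illustrative sketch only -- the real candidate-selection rule
    in BALA may differ.
    """
    h, w = image_shape[:2]
    xs, ys = mandible_pts[:, 0], mandible_pts[:, 1]
    # Maximum circumscribed rectangle of the jaw-line landmarks.
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()

    mask = np.zeros((h, w), dtype=np.uint8)
    ph, pw = patch_size
    # Assumed candidate: background region left of the face rectangle,
    # clipped so the patch stays inside the image.
    px = max(0, min(x0 - pw, w - pw))
    py = 0
    mask[py:py + ph, px:px + pw] = 1
    return mask, (x0, y0, x1, y1)
```

The returned rectangle can then be used to reject any candidate patch position that overlaps the face region.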
The generation process of an adversarial patch. Two different loss functions are used during the iterations.
The diverse BALA adversarial patches. (a) Generated color patch without graying or averaging; (b) Generated gray patch without averaging; (c) Generated gray patch with averaging; (d), (e), (f) are the re-taken versions of (a), (b), (c), respectively
The real-world experiment setup. A is the camera and B is the tripod; C is the electronic screen serving as the background and D is the foreground person
The pipeline of the two background adversarial attack experiments. The blue part is the photo re-taking experiment and the green part is the real-world experiment
The adversarial examples generated by LaVAN, BALA, and Adv-patch in the re-taking experiment. (a) The original face images; (b) Re-taken photos after the adversarial patches are displayed on the background; (c) Images of the incorrect output classes from VGG-FACE
The adversarial examples generated by BALA in the real-world experiment. (a) The original photos; (b) Re-taken photos after the adversarial patches are displayed on the background; (c) Adversarial examples obtained from the cropped faces; (d) Face images of the incorrect output classes of the VGG-FACE network
The diverse averaged images of BALA with graying. (a) Generated patch without averaging; (b) Generated patch with pixels averaged in 2 × 2 regions; (c) Generated patch with pixels averaged in 4 × 4 regions
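The pixel averaging described in this caption can be sketched as a block-mean operation. This is a hedged sketch, not the authors' code: the function name `block_average` and the divisibility requirement are assumptions; the idea is simply that coarsening the perturbation into k × k blocks helps it survive being displayed on a screen and re-photographed.

```python
import numpy as np

def block_average(patch, k):
    """Replace each k x k block of the patch with its mean value.

    Works for grayscale (h, w) or color (h, w, c) arrays.
    Averaging coarsens the perturbation so it is less distorted by
    screen display and camera resampling (illustrative sketch only).
    """
    h, w = patch.shape[:2]
    assert h % k == 0 and w % k == 0, "patch size must be divisible by k"
    # Group pixels into k x k blocks and take each block's mean.
    means = patch.reshape(h // k, k, w // k, k, -1).mean(axis=(1, 3))
    # Broadcast each block mean back to full resolution.
    out = np.repeat(np.repeat(means, k, axis=0), k, axis=1)
    return out.reshape(patch.shape)
```

For example, with k = 2 a 2 × 2 patch collapses to a single uniform value equal to the mean of its four pixels.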