Black Box Adversarial Defense Based on Image Denoising and Pix2Pix

Rui, Zhenyong and Gong, Xiugang (2023) Black Box Adversarial Defense Based on Image Denoising and Pix2Pix. Journal of Computer and Communications, 11 (12). pp. 14-30. ISSN 2327-5219

Abstract

Deep Neural Networks (DNNs) are widely utilized due to their outstanding performance, but their susceptibility to adversarial attacks poses significant security risks, making adversarial defense research crucial in the field of AI security. Current robustness defenses often rely on adversarial training, a method that tends to defend only against specific types of attacks and generalizes poorly. In response to this challenge, this paper proposes a black-box defense method based on Image Denoising and Pix2Pix (IDP). The method requires no prior knowledge of the specific attack type and eliminates the need for costly adversarial training. When predicting on an unknown sample, the IDP method first applies denoising to the input image, then feeds the processed image into a trained Pix2Pix model for image-to-image translation. Finally, the image generated by Pix2Pix is passed to the classification model for prediction. This versatile defense demonstrates strong performance against common attack methods such as FGSM, I-FGSM, DeepFool, and UPSET, showing high flexibility and transferability. In summary, the IDP method introduces new perspectives and possibilities for adversarial sample defense, alleviating the limitations of traditional adversarial training and enhancing the overall robustness of models.
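
A minimal sketch of the IDP inference pipeline described in the abstract, assuming PyTorch; the generator, classifier, and the Gaussian-blur denoiser are placeholders for components the abstract does not specify:

    import torch
    from torchvision.transforms import GaussianBlur

    def idp_predict(x, generator, classifier,
                    denoise=GaussianBlur(kernel_size=5, sigma=1.0)):
        # x: possibly adversarial image batch of shape (N, C, H, W), values in [0, 1]
        with torch.no_grad():
            cleaned = denoise(x)              # Step 1: image denoising (blur used here as a stand-in)
            translated = generator(cleaned)   # Step 2: trained Pix2Pix image-to-image translation
            logits = classifier(translated)   # Step 3: classify the reconstructed image
        return logits.argmax(dim=1)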

Item Type: Article
Subjects: Science Repository > Computer Science
Depositing User: Managing Editor
Date Deposited: 10 Jan 2024 03:43
Last Modified: 10 Jan 2024 03:43
URI: http://research.manuscritpub.com/id/eprint/3842
