Adversarial Examples Detection Using Robustness against Multi-step Gaussian Noise

Yuto Kanda, Kenji Hatano
Proceedings: Proceedings of the 13th International Conference on Software and Computer Applications (ICSCA2024)
Venue: Bali
Country: Indonesia
Language: English
Pages: 178-184
Year: 2024
Month: 2
Publication date: 2024-02-03
DOI: 10.1145/3651781.3651808

Abstract

Deep Neural Networks (DNNs) have been widely successful in various tasks. However, they are known to be easily fooled into misclassification by cleverly crafted inputs called adversarial examples (AEs). To realize reliable DNNs, various detection methods have been proposed that eliminate adversarial examples before they reach the model. One of them, the input destruction approach, exploits the difference in robustness between adversarial and clean examples: it detects adversarial examples based on how robust the model's output remains when the input image is intentionally destroyed by Gaussian noise. Existing detection methods of this kind rely on a single, empirically determined Gaussian noise, which raises the question of whether they truly maximize the anomaly of adversarial examples. We propose a novel AE detection method using multiple, multi-step Gaussian noises, extending the existing single-noise input destruction approach so as to maximize the anomaly of adversarial examples.
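As a rough illustration of the input destruction idea and its multi-step extension (a minimal sketch, not the authors' exact algorithm), the code below probes a PyTorch classifier with Gaussian noise at several scales and scores an input by how much its predicted class distribution shifts. The noise scales, the L1 shift measure, and the max aggregation are illustrative assumptions.

```python
import torch

def prediction_shift(model, x, sigma, n_samples=8):
    """Measure how much the model's prediction changes when the input
    is intentionally destroyed by Gaussian noise of scale sigma.
    Larger shifts suggest lower robustness, i.e. adversarial-like input."""
    model.eval()
    with torch.no_grad():
        clean_probs = torch.softmax(model(x), dim=1)
        shifts = []
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)
            noisy_probs = torch.softmax(model(noisy), dim=1)
            # L1 distance between clean and noisy class distributions
            shifts.append((clean_probs - noisy_probs).abs().sum(dim=1))
        return torch.stack(shifts).mean(dim=0)

def multi_step_anomaly_score(model, x, sigmas=(0.01, 0.05, 0.1, 0.2)):
    """Multi-step extension: probe the input at several noise scales
    and aggregate the shifts instead of relying on one fixed sigma."""
    scores = [prediction_shift(model, x, s) for s in sigmas]
    # Take the largest shift across scales as the anomaly score
    return torch.stack(scores).max(dim=0).values

# Usage: flag inputs whose score exceeds a threshold calibrated on clean data.
# is_adversarial = multi_step_anomaly_score(model, batch) > threshold
```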

Citation

Yuto Kanda, Kenji Hatano, Adversarial Examples Detection Using Robustness against Multi-step Gaussian Noise, Proceedings of the 13th International Conference on Software and Computer Applications (ICSCA2024), pp.178-184, 2024-02-03, DOI: 10.1145/3651781.3651808.
