Fooling a Neural Network in Military Environments: Random Untargeted Adversarial Example

Cited 5 times in Web of Science; cited 0 times in Scopus
Deep neural networks (DNNs) show superior performance in machine learning tasks such as image recognition, speech recognition, intrusion detection, and pattern analysis. However, an adversarial example, created by adding a small amount of noise to the original sample, can cause misclassification by the DNN. As adversarial examples are a serious threat to DNNs, there has been much research into the generation of adversarial examples designed for attacking DNNs. Adversarial example attacks fall into two categories: targeted and untargeted. A targeted adversarial example attack causes machines to misinterpret an object as the attacker's desired class. In contrast, an untargeted adversarial example causes machines to misinterpret an object as any incorrect class. In this paper, we focus on the untargeted adversarial example scenario because it produces less distortion from the original sample and requires a shorter generation time than the targeted scenario. However, there is a pattern problem in generating untargeted adversarial examples: because of the similarity between the original class and specific classes, it may be possible for the defending system to determine the original class by analyzing the output classes of the untargeted adversarial examples. To overcome this problem, we propose a new method for generating untargeted adversarial examples, one that uses an arbitrary class in the generation process. For experimental datasets, we used MNIST and CIFAR10, and the TensorFlow library was employed as the machine learning library. Through our experiment, we show that the proposed method can generate random untargeted adversarial examples that do not focus on a specific class for a given original class, while keeping distortion to a minimum (1.99 and 42.32 on MNIST and CIFAR10, respectively) and maintaining a 100% attack success rate.
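The core idea of the abstract — selecting an arbitrary wrong class and then optimizing the input toward it, so that the resulting misclassifications do not cluster on classes similar to the original — can be sketched as follows. This is a minimal illustration on a toy linear softmax classifier with signed gradient steps, not the paper's actual TensorFlow implementation; the model weights, step size, and helper names are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy linear classifier standing in for a trained DNN (weights are
# random placeholders, not from the paper).
n_classes, n_features = 10, 64
W = rng.normal(size=(n_classes, n_features))

def predict(x):
    return int(np.argmax(W @ x))

def random_untargeted_example(x, eps=0.05, steps=200):
    """Pick an arbitrary class != original, then perturb x toward it."""
    orig = predict(x)
    # Arbitrary (random) wrong class -- the key idea of the random
    # untargeted attack described in the abstract, which avoids the
    # "pattern problem" of always landing on a similar-looking class.
    target = rng.choice([c for c in range(n_classes) if c != orig])
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_adv)
        # Gradient of cross-entropy loss toward `target` w.r.t. the input:
        # dL/dx = W^T (p - onehot(target)).
        grad = W.T @ (p - np.eye(n_classes)[target])
        x_adv -= eps * np.sign(grad)  # FGSM-style signed descent step
        if predict(x_adv) == target:
            break
    return x_adv, orig, target

x = rng.normal(size=n_features)
x_adv, orig, target = random_untargeted_example(x)
distortion = np.linalg.norm(x_adv - x)
```

Because the target class is drawn uniformly from the wrong classes, repeated runs over the same original class spread the adversarial outputs across all other classes, which is the property the paper's experiments measure alongside distortion and attack success rate.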
Publisher
IEEE(Communications Society)
Issue Date
2018-10-30
Language
English
Citation

Military Communications Conference 2018 (MILCOM 2018), pp.456 - 461

DOI
10.1109/MILCOM.2018.8599707
URI
http://hdl.handle.net/10203/246439
Appears in Collection
CS-Conference Papers (학술회의논문: conference papers)