Friend-safe adversarial examples in an evasion attack on a deep neural network

Abstract
Deep neural networks (DNNs) perform effectively in machine learning tasks such as image recognition, intrusion detection, and pattern analysis. Recently proposed adversarial examples (slightly modified data that lead to incorrect classification) pose a severe threat to the security of DNNs. However, in some situations adversarial examples might be useful, e.g., for deceiving an enemy classifier on a battlefield. In such a case, friendly classifiers should not be deceived. In this paper, we propose friend-safe adversarial examples, which friendly machines can still classify correctly. To generate such examples, a transformation is applied that minimizes both the friend's misclassification and the adversary's correct classification. We present two configurations of the scheme: a targeted and an untargeted class attack. In experiments on the MNIST dataset, the proposed method achieves a 100% attack success rate and 100% friendly accuracy with little distortion (2.18 and 1.53 for the targeted and untargeted configurations, respectively). Finally, we propose a mixed battlefield application and a new covert channel scheme.
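
The transformation described in the abstract can be pictured as minimizing a joint loss over the two classifiers. Below is a minimal PyTorch sketch of the targeted configuration, not the authors' implementation: the classifier handles friend and enemy, the weighting constants c1 and c2, and the Adam-based optimization loop are illustrative assumptions. For the untargeted configuration, the enemy term would instead maximize the enemy's loss on the true label rather than minimize it on a target label.

    import torch
    import torch.nn.functional as F

    def friend_safe_example(x_orig, y_true, y_target, friend, enemy,
                            c1=1.0, c2=1.0, steps=500, lr=0.01):
        """Sketch: perturb x_orig so that `friend` still predicts y_true
        while `enemy` is driven to the target class y_target."""
        x_adv = x_orig.clone().requires_grad_(True)
        opt = torch.optim.Adam([x_adv], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            distortion = torch.norm(x_adv - x_orig)               # keep the perturbation small
            friend_term = F.cross_entropy(friend(x_adv), y_true)  # friend must stay correct
            enemy_term = F.cross_entropy(enemy(x_adv), y_target)  # enemy pushed to y_target
            loss = distortion + c1 * friend_term + c2 * enemy_term
            loss.backward()
            opt.step()
        return x_adv.detach()

Here friend and enemy are assumed to be pretrained models returning class logits, and y_true / y_target are class-index tensors; the three loss terms correspond to the abstract's goals of low distortion, correct friendly classification, and adversarial misclassification.
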
Publisher
Springer Verlag
Issue Date
2017-12-01
Language
English
Citation
20th International Conference on Information Security and Cryptology, ICISC 2017, pp. 351-367
DOI
10.1007/978-3-319-78556-1_20
URI
http://hdl.handle.net/10203/241181
Appears in Collection
CS-Conference Papers