One-Pixel Adversarial Example that is Safe for Friendly Deep Neural Networks

DC Field | Value | Language
dc.contributor.author | Kwon, Hyun | ko
dc.contributor.author | Kim, Yongchul | ko
dc.contributor.author | Yoon, Hyunsoo | ko
dc.contributor.author | Choi, Daeseon | ko
dc.date.accessioned | 2018-08-20T07:59:44Z | -
dc.date.available | 2018-08-20T07:59:44Z | -
dc.date.created | 2018-08-11 | -
dc.date.issued | 2018-08-23 | -
dc.identifier.citation | 19th World International Conference on Information Security and Applications (WISA), pp. 42-54 | -
dc.identifier.uri | http://hdl.handle.net/10203/244922 | -
dc.description.abstract | Deep neural networks (DNNs) offer superior performance in machine learning tasks such as image recognition, speech recognition, pattern analysis, and intrusion detection. In this paper, we propose a one-pixel adversarial example that is safe for friendly deep neural networks. By modifying only one pixel, the proposed method generates an adversarial example that is misclassified by an enemy classifier yet correctly classified by a friendly classifier (a minimal illustrative sketch of this dual-classifier search follows the metadata listing below). To verify the performance of the proposed method, we used the CIFAR-10 dataset, ResNet classifiers, and the TensorFlow library in our experiments. Results show that, modifying only one pixel, the proposed method achieves success rates of 13.5% and 26.0% in targeted and untargeted attacks, respectively. These rates are slightly lower than those of the conventional one-pixel method (15% and 33.5%, respectively); however, the proposed method preserves correct classification by the friendly classifier in 100% of cases. Moreover, when allowed to modify five pixels, the proposed method achieves success rates of 20.5% and 52.0% in targeted and untargeted attacks, respectively. | -
dc.language | English | -
dc.publisher | Korea Institute of Information Security & Cryptology | -
dc.title | One-Pixel Adversarial Example that is Safe for Friendly Deep Neural Networks | -
dc.type | Conference | -
dc.identifier.wosid | 000766408800004 | -
dc.identifier.scopusid | 2-s2.0-85065033964 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 42 | -
dc.citation.endingpage | 54 | -
dc.citation.publicationname | 19th World International Conference on Information Security and Applications (WISA) | -
dc.identifier.conferencecountry | KO | -
dc.identifier.conferencelocation | Lotte City Hotel, Jeju Island | -
dc.identifier.doi | 10.1007/978-3-030-17982-3_4 | -
dc.contributor.localauthor | Yoon, Hyunsoo | -
dc.contributor.nonIdAuthor | Kim, Yongchul | -
dc.contributor.nonIdAuthor | Choi, Daeseon | -
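
The abstract describes a search for a perturbation that an enemy classifier misclassifies while a friendly classifier still labels correctly. Below is a minimal, hypothetical sketch of that dual-classifier objective, not the authors' implementation: it reuses the differential-evolution search of the conventional one-pixel attack and simply adds the friendly classifier's confidence in the true class to the fitness. The enemy and friendly model objects, the equal weighting of the two fitness terms, and all function names here are illustrative assumptions.

import numpy as np
from scipy.optimize import differential_evolution

def perturb(image, xs):
    # Apply pixel perturbations encoded as flat (row, col, r, g, b) tuples.
    adv = image.copy()
    for row, col, r, g, b in np.asarray(xs).reshape(-1, 5):
        adv[int(row), int(col)] = (r, g, b)
    return adv

def fitness(xs, image, true_label, target_label, enemy, friendly):
    # Lower is better: raise the enemy model's confidence in the target
    # class while keeping the friendly model confident in the true class.
    # The equal 1:1 weighting is an assumption, not taken from the paper.
    adv = perturb(image, xs)[np.newaxis]  # add batch dimension
    p_enemy = enemy.predict(adv, verbose=0)[0]
    p_friendly = friendly.predict(adv, verbose=0)[0]
    return -(p_enemy[target_label] + p_friendly[true_label])

def one_pixel_safe_attack(image, true_label, target_label,
                          enemy, friendly, pixels=1, maxiter=75):
    # Search over pixel position and normalized RGB values with the same
    # differential-evolution strategy as the conventional one-pixel attack.
    h, w, _ = image.shape
    bounds = [(0, h - 1), (0, w - 1), (0, 1), (0, 1), (0, 1)] * pixels
    result = differential_evolution(
        fitness, bounds,
        args=(image, true_label, target_label, enemy, friendly),
        maxiter=maxiter, popsize=10, recombination=1.0, seed=0)
    adv = perturb(image, result.x)
    enemy_pred = np.argmax(enemy.predict(adv[np.newaxis], verbose=0)[0])
    friendly_pred = np.argmax(friendly.predict(adv[np.newaxis], verbose=0)[0])
    # Success means the enemy is fooled AND the friendly model is unaffected.
    success = enemy_pred == target_label and friendly_pred == true_label
    return adv, success

For the untargeted setting reported in the abstract, the fitness would instead minimize the enemy model's confidence in the true class; pixels=5 would correspond to the five-pixel variant.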
Appears in Collection
CS-Conference Papers (학술회의논문)
Files in This Item
There are no files associated with this item.
