Friend-safe evasion attack: An adversarial example that is correctly recognized by a friendly classifier

Cited 19 times in Web of Science; cited 0 times in Scopus
DC Field | Value | Language
dc.contributor.author | Kwon, Hyun | ko
dc.contributor.author | Kim, Yongchul | ko
dc.contributor.author | Park, Ki-Woong | ko
dc.contributor.author | Yoon, Hyunsoo | ko
dc.contributor.author | Choi, Daeseon | ko
dc.date.accessioned | 2018-09-18T06:37:09Z | -
dc.date.available | 2018-09-18T06:37:09Z | -
dc.date.created | 2018-09-11 | -
dc.date.issued | 2018-08 | -
dc.identifier.citation | COMPUTERS & SECURITY, v.78, pp.380 - 397 | -
dc.identifier.issn | 0167-4048 | -
dc.identifier.uri | http://hdl.handle.net/10203/245667 | -
dc.description.abstract | Deep neural networks (DNNs) have been applied in several useful services, such as image recognition, intrusion detection, and pattern analysis of machine learning tasks. Recently proposed adversarial examples, slightly modified data that lead to incorrect classification, are a severe threat to the security of DNNs. In some situations, however, an adversarial example might be useful, such as when deceiving an enemy classifier on the battlefield. In such a scenario, it is necessary that a friendly classifier not be deceived. In this paper, we propose a friend-safe adversarial example, meaning that the friendly machine can classify the adversarial example correctly. To produce such examples, a transformation is carried out to minimize the probability of incorrect classification by the friend and that of correct classification by the adversary. We suggest two configurations for the scheme: targeted and untargeted class attacks. We performed experiments with this scheme using the MNIST and CIFAR10 datasets. Our proposed method shows a 100% attack success rate and 100% friend accuracy with only a small distortion: 2.18 and 1.54 for the two respective MNIST configurations, and 49.02 and 27.61 for the two respective CIFAR10 configurations. Additionally, we propose a new covert channel scheme and a mixed battlefield application for consideration in further applications. | -
dc.language | English | -
dc.publisher | ELSEVIER ADVANCED TECHNOLOGY | -
dc.title | Friend-safe evasion attack: An adversarial example that is correctly recognized by a friendly classifier | -
dc.type | Article | -
dc.identifier.wosid | 000447358700026 | -
dc.identifier.scopusid | 2-s2.0-85052311348 | -
dc.type.rims | ART | -
dc.citation.volume | 78 | -
dc.citation.beginningpage | 380 | -
dc.citation.endingpage | 397 | -
dc.citation.publicationname | COMPUTERS & SECURITY | -
dc.identifier.doi | 10.1016/j.cose.2018.07.015 | -
dc.contributor.localauthor | Yoon, Hyunsoo | -
dc.contributor.nonIdAuthor | Kim, Yongchul | -
dc.contributor.nonIdAuthor | Park, Ki-Woong | -
dc.contributor.nonIdAuthor | Choi, Daeseon | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Review | -
dc.subject.keywordAuthor | Deep Neural Network | -
dc.subject.keywordAuthor | Evasion Attack | -
dc.subject.keywordAuthor | Adversarial Example | -
dc.subject.keywordAuthor | Covert Channel | -
dc.subject.keywordAuthor | Machine Learning | -
dc.subject.keywordPlus | DEEP NEURAL-NETWORKS | -
dc.subject.keywordPlus | SECURITY | -
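
Illustrative note: the abstract above describes a transformation that simultaneously minimizes the friend's probability of misclassifying the example and the adversary's probability of classifying it correctly, under a small distortion. The sketch below is a minimal, hypothetical rendering of that kind of objective (untargeted configuration) in PyTorch; the function name, loss weighting, and optimizer settings are assumptions for illustration only and do not reproduce the authors' exact method.

    # Hypothetical sketch of a friend-safe evasion objective; not the authors' code.
    # Assumes two pre-trained classifiers returning logits and inputs scaled to [0, 1].
    import torch
    import torch.nn.functional as F

    def friend_safe_example(x, y_true, friend_model, enemy_model,
                            steps=500, lr=0.01, c=1.0):
        """Perturb x so the friend still predicts y_true while the enemy does not
        (untargeted configuration). Returns the perturbed example."""
        delta = torch.zeros_like(x, requires_grad=True)
        optimizer = torch.optim.Adam([delta], lr=lr)

        for _ in range(steps):
            x_adv = torch.clamp(x + delta, 0.0, 1.0)

            # Friend should keep classifying correctly: minimize its loss on y_true.
            friend_loss = F.cross_entropy(friend_model(x_adv), y_true)

            # Enemy should be fooled: maximize its loss on the true label
            # (written as a negated term so the whole objective is minimized).
            enemy_loss = -F.cross_entropy(enemy_model(x_adv), y_true)

            # Keep the perturbation small (simple L2 penalty, weighted by c).
            distortion = torch.sum(delta ** 2)

            loss = friend_loss + enemy_loss + c * distortion
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        return torch.clamp(x + delta, 0.0, 1.0).detach()

With MNIST- or CIFAR10-sized input batches and two pre-trained models, calling friend_safe_example(x, y_true, friend_model, enemy_model) would return a perturbed batch intended to remain correctly classified by the friend while being misclassified by the enemy; a targeted variant would instead push the enemy toward a chosen target label.
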
Appears in Collection
CS-Journal Papers(저널논문)
Files in This Item
There are no files associated with this item.