Random Untargeted Adversarial Example on Deep Neural Network

Cited 16 times in Web of Science · Cited 13 times in Scopus
DC Field | Value | Language
dc.contributor.author | Kwon, Hyun | ko
dc.contributor.author | Kim, Yongchul | ko
dc.contributor.author | Yoon, Hyunsoo | ko
dc.contributor.author | Choi, Daeseon | ko
dc.date.accessioned | 2019-01-23T06:56:28Z | -
dc.date.available | 2019-01-23T06:56:28Z | -
dc.date.created | 2018-12-11 | -
dc.date.issued | 2018-12 | -
dc.identifier.citation | SYMMETRY-BASEL, v.10, no.12 | -
dc.identifier.issn | 2073-8994 | -
dc.identifier.uri | http://hdl.handle.net/10203/250146 | -
dc.description.abstract | Deep neural networks (DNNs) have demonstrated remarkable performance in machine learning areas such as image recognition, speech recognition, intrusion detection, and pattern analysis. However, DNNs have been shown to be vulnerable to adversarial examples, which are created by adding a small amount of noise to an original sample so that the DNN misclassifies it. Such adversarial examples can lead to fatal accidents in applications such as autonomous vehicles and disease diagnostics. Consequently, the generation of adversarial examples has recently attracted extensive research attention. An adversarial example is categorized as either targeted or untargeted. In this paper, we focus on the untargeted scenario because it offers a shorter learning time and less distortion than the targeted one. However, untargeted adversarial examples suffer from a pattern vulnerability: because of the similarity between the original class and certain specific classes, a defending system may be able to infer the original class by analyzing the output classes of the untargeted adversarial examples. To overcome this problem, we propose a new method for generating untargeted adversarial examples that uses an arbitrary (randomly chosen) class in the generation process. Moreover, we show that the proposed scheme can be applied to steganography. Through experiments, we show that the proposed scheme achieves a 100% attack success rate with minimal distortion (1.99 and 42.32 on the MNIST and CIFAR10 datasets, respectively) and without the pattern vulnerability. Using a steganography test, we show that the scheme can fool humans: the probability of their detecting the hidden classes was equal to that of random selection. | -
dc.language | English | -
dc.publisher | MDPI | -
dc.title | Random Untargeted Adversarial Example on Deep Neural Network | -
dc.type | Article | -
dc.identifier.wosid | 000454725100076 | -
dc.identifier.scopusid | 2-s2.0-85059017272 | -
dc.type.rims | ART | -
dc.citation.volume | 10 | -
dc.citation.issue | 12 | -
dc.citation.publicationname | SYMMETRY-BASEL | -
dc.identifier.doi | 10.3390/sym10120738 | -
dc.contributor.localauthor | Yoon, Hyunsoo | -
dc.contributor.nonIdAuthor | Kim, Yongchul | -
dc.contributor.nonIdAuthor | Choi, Daeseon | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | deep neural network | -
dc.subject.keywordAuthor | adversarial example | -
dc.subject.keywordAuthor | untargeted adversarial example | -
dc.subject.keywordAuthor | random selection | -
dc.subject.keywordPlus | EVASION ATTACK | -
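
The abstract above describes the core idea: rather than letting an untargeted attack drift toward whichever wrong class is nearest (which leaks the original class through output patterns), the perturbation is driven toward a wrong class drawn uniformly at random. The following is a minimal sketch of that idea, not the authors' code: the toy model, the Adam optimizer, the step count, and the distortion weight are all illustrative assumptions.

```python
# Minimal sketch of a "random untargeted" adversarial example.
# NOT the paper's implementation; model and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def random_untargeted_attack(model, x, y_true, num_classes, steps=200, lr=0.01):
    """Perturb x so the model outputs a uniformly random *wrong* class."""
    # Draw the target uniformly from every class except the true one, so the
    # output class carries no pattern that reveals the original class.
    wrong = [c for c in range(num_classes) if c != y_true]
    y_target = torch.tensor([wrong[torch.randint(len(wrong), (1,)).item()]])

    delta = torch.zeros_like(x, requires_grad=True)  # additive noise
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(x + delta)
        # Push toward the random target while penalizing L2 distortion.
        loss = F.cross_entropy(logits, y_target) + 0.1 * delta.pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach(), int(y_target)

# Toy usage (shapes mimic MNIST; the linear model is a stand-in):
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)
x_adv, hidden_class = random_untargeted_attack(model, x, y_true=3, num_classes=10)
```

Because the target is drawn uniformly over the wrong classes, the distribution of output classes is flat, which is what defeats the pattern analysis described in the abstract; the randomly chosen class can also serve as hidden information in the steganographic use the paper mentions.
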
Appears in Collection
CS-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.