DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kwon, Hyun | ko |
dc.contributor.author | Kim, Yongchul | ko |
dc.contributor.author | Yoon, Hyunsoo | ko |
dc.contributor.author | Choi, Daeseon | ko |
dc.date.accessioned | 2018-08-20T07:59:44Z | - |
dc.date.available | 2018-08-20T07:59:44Z | - |
dc.date.created | 2018-08-11 | - |
dc.date.issued | 2018-08-23 | - |
dc.identifier.citation | 19th World International Conference on Information Security and Applications (WISA), pp.42 - 54 | - |
dc.identifier.uri | http://hdl.handle.net/10203/244922 | - |
dc.description.abstract | Deep neural networks (DNNs) offer superior performance in machine learning tasks such as image recognition, speech recognition, pattern analysis, and intrusion detection. In this paper, we propose a one-pixel adversarial example that is safe for friendly deep neural networks. By modifying only one pixel, the proposed method generates a one-pixel-safe adversarial example that is misclassified by an enemy classifier yet correctly classified by a friendly classifier. To verify the performance of the proposed method, we used the CIFAR-10 dataset, ResNet classifiers, and the TensorFlow library in our experiments. Results show that, by modifying only one pixel, the proposed method achieves success rates of 13.5% and 26.0% in targeted and untargeted attacks, respectively. These rates are slightly lower than those of the conventional one-pixel method, which achieves 15% and 33.5% in targeted and untargeted attacks, respectively; however, the proposed method protects 100% of the friendly classifiers. In addition, when allowed to modify five pixels, the proposed method achieves success rates of 20.5% and 52.0% in targeted and untargeted attacks, respectively. | - |
dc.language | English | - |
dc.publisher | Korea Institute of Information Security & Cryptology | - |
dc.title | One-Pixel Adversarial Example that is Safe for Friendly Deep Neural Networks | - |
dc.type | Conference | - |
dc.identifier.wosid | 000766408800004 | - |
dc.identifier.scopusid | 2-s2.0-85065033964 | - |
dc.type.rims | CONF | - |
dc.citation.beginningpage | 42 | - |
dc.citation.endingpage | 54 | - |
dc.citation.publicationname | 19th World International Conference on Information Security and Applications (WISA) | - |
dc.identifier.conferencecountry | KO | - |
dc.identifier.conferencelocation | Lotte City Hotel, Jeju Island | - |
dc.identifier.doi | 10.1007/978-3-030-17982-3_4 | - |
dc.contributor.localauthor | Yoon, Hyunsoo | - |
dc.contributor.nonIdAuthor | Kim, Yongchul | - |
dc.contributor.nonIdAuthor | Choi, Daeseon | - |
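
The abstract above describes searching for a single-pixel change that fools an "enemy" classifier while a "friendly" classifier still predicts the true label. The following is a minimal sketch of that idea, not the authors' implementation: the function name, the `enemy_predict`/`friendly_predict` callables, and the random-search strategy (a simple stand-in for the differential-evolution search typically used by one-pixel attacks) are all illustrative assumptions.

```python
# Minimal sketch: accept a candidate one-pixel perturbation only if the
# friendly classifier stays correct and the enemy classifier is fooled.
import numpy as np

def one_pixel_safe_attack(image, true_label, enemy_predict, friendly_predict,
                          n_candidates=500, rng=None):
    """Randomly search one-pixel perturbations of an HxWx3 image in [0, 1].

    enemy_predict / friendly_predict: callables mapping an image to a class id.
    Returns a perturbed image that the enemy misclassifies while the friendly
    classifier still predicts true_label, or None if no candidate succeeds.
    """
    rng = rng if rng is not None else np.random.default_rng()
    h, w, _ = image.shape
    for _ in range(n_candidates):
        candidate = image.copy()
        # Pick one pixel location and replace it with a new RGB value.
        y, x = rng.integers(h), rng.integers(w)
        candidate[y, x] = rng.random(3)
        # "Safe" condition: the friendly classifier must remain correct.
        if friendly_predict(candidate) != true_label:
            continue
        # Attack condition (untargeted): the enemy classifier must be fooled.
        if enemy_predict(candidate) != true_label:
            return candidate
    return None
```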