DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kwon, Hyun | ko |
dc.contributor.author | Kim, Yongchul | ko |
dc.contributor.author | Park, Ki-Woong | ko |
dc.contributor.author | Yoon, Hyunsoo | ko |
dc.contributor.author | Choi, Daeseon | ko |
dc.date.accessioned | 2018-09-18T06:37:09Z | - |
dc.date.available | 2018-09-18T06:37:09Z | - |
dc.date.created | 2018-09-11 | - |
dc.date.issued | 2018-08 | - |
dc.identifier.citation | COMPUTERS & SECURITY, v.78, pp.380 - 397 | - |
dc.identifier.issn | 0167-4048 | - |
dc.identifier.uri | http://hdl.handle.net/10203/245667 | - |
dc.description.abstract | Deep neural networks (DNNs) have been applied in several useful services, such as image recognition, intrusion detection, and pattern analysis of machine learning tasks. Recently proposed adversarial examples, slightly modified data that lead to incorrect classification, are a severe threat to the security of DNNs. In some situations, however, an adversarial example might be useful, such as when deceiving an enemy classifier on the battlefield. In such a scenario, it is necessary that a friendly classifier not be deceived. In this paper, we propose a friend-safe adversarial example, meaning that the friendly machine can classify the adversarial example correctly. To produce such examples, a transformation is carried out to minimize the probability of incorrect classification by the friend and that of correct classification by the adversary. We suggest two configurations for the scheme: targeted and untargeted class attacks. We performed experiments with this scheme using the MNIST and CIFAR10 datasets. Our proposed method shows a 100% attack success rate and 100% friend accuracy with only a small distortion: 2.18 and 1.54 for the two respective MNIST configurations, and 49.02 and 27.61 for the two respective CIFAR10 configurations. Additionally, we propose a new covert channel scheme and a mixed battlefield application for consideration in further applications. (An illustrative sketch of the transformation appears after this record.) | - |
dc.language | English | - |
dc.publisher | ELSEVIER ADVANCED TECHNOLOGY | - |
dc.title | Friend-safe evasion attack: An adversarial example that is correctly recognized by a friendly classifier | - |
dc.type | Article | - |
dc.identifier.wosid | 000447358700026 | - |
dc.identifier.scopusid | 2-s2.0-85052311348 | - |
dc.type.rims | ART | - |
dc.citation.volume | 78 | - |
dc.citation.beginningpage | 380 | - |
dc.citation.endingpage | 397 | - |
dc.citation.publicationname | COMPUTERS & SECURITY | - |
dc.identifier.doi | 10.1016/j.cose.2018.07.015 | - |
dc.contributor.localauthor | Yoon, Hyunsoo | - |
dc.contributor.nonIdAuthor | Kim, Yongchul | - |
dc.contributor.nonIdAuthor | Park, Ki-Woong | - |
dc.contributor.nonIdAuthor | Choi, Daeseon | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Deep Neural Network | - |
dc.subject.keywordAuthor | Evasion Attack | - |
dc.subject.keywordAuthor | Adversarial Example | - |
dc.subject.keywordAuthor | Covert Channel | - |
dc.subject.keywordAuthor | Machine Learning | - |
dc.subject.keywordPlus | DEEP NEURAL-NETWORKS | - |
dc.subject.keywordPlus | SECURITY | - |
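
The abstract describes a transformation that jointly minimizes the input distortion, the probability that the friendly classifier is wrong, and the probability that the adversary's classifier is right. Below is a minimal illustrative sketch of that idea, not the authors' reference implementation: it assumes PyTorch, two pretrained classifiers `friend` and `enemy` that return logits, inputs scaled to [0, 1], and placeholder hyperparameters (`steps`, `lr`, and the weighting constants `c_friend` and `c_enemy` are assumptions). The targeted configuration is shown; the paper's exact objective and optimizer may differ.

```python
# Hedged sketch of a friend-safe adversarial example (targeted variant).
# `friend` and `enemy` are assumed pretrained torch.nn.Module classifiers
# returning logits; x is a batch of images in [0, 1]; y_true/y_target are
# label tensors. All hyperparameter values are illustrative assumptions.
import torch
import torch.nn.functional as F

def friend_safe_example(x, y_true, y_target, friend, enemy,
                        steps=500, lr=0.01, c_friend=1.0, c_enemy=1.0):
    delta = torch.zeros_like(x, requires_grad=True)   # perturbation to optimize
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0.0, 1.0)           # keep a valid image
        loss = (delta.pow(2).sum()                                     # distortion term
                + c_friend * F.cross_entropy(friend(x_adv), y_true)    # friend stays correct
                + c_enemy * F.cross_entropy(enemy(x_adv), y_target))   # enemy is misled
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach().clamp(0.0, 1.0)
```

For the untargeted configuration mentioned in the abstract, the enemy term would instead be subtracted on the true label (maximizing the enemy's loss on `y_true`) rather than minimized on a chosen target. This penalty-style formulation is a generic way to trade off distortion against the two classification constraints and is offered only as a reading aid for the abstract.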