DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kwon, Hyun | ko |
dc.contributor.author | Yoon, Hyunsoo | ko |
dc.contributor.author | Park, Ki-Woong | ko |
dc.date.accessioned | 2020-05-26T02:20:17Z | - |
dc.date.available | 2020-05-26T02:20:17Z | - |
dc.date.created | 2020-04-04 | - |
dc.date.issued | 2020-04 | - |
dc.identifier.citation | IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, v.E103D, no.4 | - |
dc.identifier.issn | 1745-1361 | - |
dc.identifier.uri | http://hdl.handle.net/10203/274290 | - |
dc.description.abstract | We propose a multi-targeted backdoor that misleads different models into different classes. The method trains multiple models on data containing a specific trigger, such that each model learns to misclassify triggered inputs into a different class. For example, an attacker can use a single multi-targeted backdoor sample to make model A recognize it as a stop sign, model B as a left-turn sign, model C as a right-turn sign, and model D as a U-turn sign. We used MNIST and Fashion-MNIST as experimental datasets and TensorFlow as the machine learning library. Experimental results show that the proposed method causes a triggered sample to be misclassified into different classes by different models with a 100% attack success rate on MNIST and Fashion-MNIST, while maintaining 97.18% and 91.1% accuracy, respectively, on trigger-free data. | - |
dc.language | English | - |
dc.publisher | IEICE-INST ELECTRONICS INFORMATION COMMUNICATIONS ENG | - |
dc.title | Multi-Targeted Backdoor: Indentifying Backdoor Attack for Multiple Deep Neural Networks | - |
dc.type | Article | - |
dc.identifier.wosid | 000530667500018 | - |
dc.identifier.scopusid | 2-s2.0-85082739677 | - |
dc.type.rims | ART | - |
dc.citation.volume | E103D | - |
dc.citation.issue | 4 | - |
dc.citation.publicationname | IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS | - |
dc.identifier.doi | 10.1587/transinf.2019edl8170 | - |
dc.contributor.localauthor | Yoon, Hyunsoo | - |
dc.contributor.nonIdAuthor | Park, Ki-Woong | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | machine learning | - |
dc.subject.keywordAuthor | deep neural network | - |
dc.subject.keywordAuthor | backdoor attack | - |
dc.subject.keywordAuthor | poisoning attack | - |
dc.subject.keywordAuthor | adversarial example | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
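The poisoning step described in the abstract (one shared trigger, a different target label per model) can be sketched minimally as follows. This is an illustrative reconstruction, not the authors' code: the function names (`stamp_trigger`, `poison_for_model`), the corner-patch trigger shape, and the tiny toy images are all assumptions.

```python
# Hypothetical sketch of the multi-targeted poisoning idea: the SAME trigger
# patch is stamped onto training images, but each model's poisoned training
# set relabels those images to a DIFFERENT target class.

def stamp_trigger(image, trigger_value=1.0, size=2):
    """Return a copy of a 2-D image (list of rows) with a size x size
    trigger patch written into the bottom-right corner (assumed shape)."""
    img = [row[:] for row in image]
    for r in range(len(img) - size, len(img)):
        for c in range(len(img[r]) - size, len(img[r])):
            img[r][c] = trigger_value
    return img

def poison_for_model(images, labels, poison_indices, target_class):
    """Build one model's training set: samples at poison_indices receive
    the trigger and are relabeled to that model's target class."""
    new_images, new_labels = [], []
    for i, (img, lbl) in enumerate(zip(images, labels)):
        if i in poison_indices:
            new_images.append(stamp_trigger(img))
            new_labels.append(target_class)
        else:
            new_images.append(img)
            new_labels.append(lbl)
    return new_images, new_labels

# One trigger, four models, four distinct target classes (mirroring the
# stop / left-turn / right-turn / U-turn example from the abstract).
targets = {"model_A": 0, "model_B": 1, "model_C": 2, "model_D": 3}
clean = [[[0.0] * 4 for _ in range(4)] for _ in range(5)]  # five 4x4 toy images
labels = [7, 7, 7, 7, 7]
poisoned = {m: poison_for_model(clean, labels, {0, 1}, t)
            for m, t in targets.items()}
```

Each model is then trained on its own poisoned copy; at test time, a single triggered input is steered to a different class by each model, which is the multi-targeted behavior the paper evaluates.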