Acoustic-decoy: Detection of adversarial examples through audio modification on speech recognition system

Cited 27 times in Web of Science · Cited 19 times in Scopus
  • Hits: 581
  • Downloads: 303
DC Field | Value | Language
dc.contributor.author | Kwon, Hyun | ko
dc.contributor.author | Yoon, Hyunsoo | ko
dc.contributor.author | Park, Ki-Woong | ko
dc.date.accessioned | 2021-01-04T05:50:14Z | -
dc.date.available | 2021-01-04T05:50:14Z | -
dc.date.created | 2020-11-04 | -
dc.date.issued | 2020-12 | -
dc.identifier.citation | NEUROCOMPUTING, v.417, pp.357 - 370 | -
dc.identifier.issn | 0925-2312 | -
dc.identifier.uri | http://hdl.handle.net/10203/279430 | -
dc.description.abstract | Deep neural networks (DNNs) perform well on recognition and prediction tasks such as image recognition, speech recognition, video recognition, and pattern analysis. However, adversarial examples, created by adding a small amount of noise to original samples, pose a serious threat because they can cause the DNN to misclassify. Adversarial examples have been studied primarily in the context of images, but their effect in the audio domain is now drawing considerable interest as well. For example, by adding a small distortion, imperceptible to humans, to an original audio sample, an audio adversarial example can be created that humans hear as error-free but that a machine misinterprets. Therefore, a defense method against audio adversarial examples is needed. In this paper, we propose an acoustic-decoy method for detecting audio adversarial examples. Its key feature is that it applies well-formalized audio modifications that introduce enough distortion to change the classification result of an adversarial example but leave the classification result of an original sample unchanged. Experimental results show that the proposed scheme can detect adversarial examples by reducing the similarity rate for an adversarial example to 6.21%, 1.27%, and 0.66% using low-pass filtering (with 12 dB roll-off), 8-bit reduction, and audio silence removal, respectively. It can detect an audio adversarial example with a success rate of 97% by performing a comparison with the initial audio sample. (A minimal illustrative sketch of this detection pipeline follows the metadata table below.) | -
dc.language | English | -
dc.publisher | ELSEVIER | -
dc.title | Acoustic-decoy: Detection of adversarial examples through audio modification on speech recognition system | -
dc.type | Article | -
dc.identifier.wosid | 000590407200009 | -
dc.identifier.scopusid | 2-s2.0-85091765177 | -
dc.type.rims | ART | -
dc.citation.volume | 417 | -
dc.citation.beginningpage | 357 | -
dc.citation.endingpage | 370 | -
dc.citation.publicationname | NEUROCOMPUTING | -
dc.identifier.doi | 10.1016/j.neucom.2020.07.101 | -
dc.contributor.localauthor | Yoon, Hyunsoo | -
dc.contributor.nonIdAuthor | Kwon, Hyun | -
dc.contributor.nonIdAuthor | Park, Ki-Woong | -
dc.description.isOpenAccess | Y | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Machine learning | -
dc.subject.keywordAuthor | Audio modification | -
dc.subject.keywordAuthor | Audio adversarial example | -
dc.subject.keywordAuthor | Defense technology | -
dc.subject.keywordAuthor | Deep neural network (DNN) | -
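
The abstract describes the acoustic-decoy pipeline at a high level: apply audio modifications (low-pass filtering with a 12 dB roll-off, 8-bit reduction, silence removal) that barely change a benign sample's transcription but disrupt an adversarial perturbation, then flag inputs whose transcription similarity drops after modification. The sketch below is a minimal illustration of that idea, not the authors' code: the `transcribe` callable, the 4 kHz cutoff, the 0.01 silence threshold, and the 0.8 similarity threshold are all assumptions supplied for the example, and the filtering uses SciPy rather than whatever toolchain the paper used.

```python
# Illustrative sketch of the acoustic-decoy detection idea (not the paper's
# implementation).  Mild audio modifications should leave a benign sample's
# transcription nearly intact but break a carefully tuned adversarial
# perturbation; a large transcription change therefore flags the input.
from difflib import SequenceMatcher
from typing import Callable

import numpy as np
from scipy.signal import butter, sosfilt


def low_pass(audio: np.ndarray, sr: int, cutoff_hz: float = 4000.0) -> np.ndarray:
    """Gentle low-pass filter; a 2nd-order Butterworth rolls off at ~12 dB/octave."""
    sos = butter(2, cutoff_hz, btype="low", fs=sr, output="sos")
    return sosfilt(sos, audio)


def bit_reduce(audio: np.ndarray, bits: int = 8) -> np.ndarray:
    """Quantize samples (assumed in [-1, 1]) down to the given bit depth."""
    levels = 2 ** (bits - 1)
    return np.round(audio * levels) / levels


def remove_silence(audio: np.ndarray, threshold: float = 0.01) -> np.ndarray:
    """Drop samples whose magnitude falls below a small amplitude threshold."""
    return audio[np.abs(audio) > threshold]


def is_adversarial(audio: np.ndarray, sr: int,
                   transcribe: Callable[[np.ndarray, int], str],
                   min_similarity: float = 0.8) -> bool:
    """Flag the input if any modification changes its transcription too much."""
    original_text = transcribe(audio, sr)
    for modified in (low_pass(audio, sr), bit_reduce(audio), remove_silence(audio)):
        similarity = SequenceMatcher(None, original_text,
                                     transcribe(modified, sr)).ratio()
        if similarity < min_similarity:
            return True  # transcription collapsed -> likely adversarial
    return False


if __name__ == "__main__":
    # Stand-in "ASR" so the script runs end to end; a real recognizer
    # (e.g. a DeepSpeech-style model) would be passed in its place.
    def fake_transcribe(audio: np.ndarray, sr: int) -> str:
        return "long utterance" if audio.size > sr // 2 else "short"

    sr = 16000
    tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
    print(is_adversarial(tone, sr, fake_transcribe))  # False for this benign tone
```

Since the paper reports post-modification similarity rates of only 6.21%, 1.27%, and 0.66% for adversarial inputs under the three modifications, even a much looser threshold than the 0.8 assumed here would separate adversarial from benign samples in this scheme.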