Research on adversarial attacks in multiple deep neural networks

Abstract

Deep neural networks (DNNs) are widely used for image recognition, speech recognition, intrusion tolerance, natural language processing, and game playing. The security and safety of neural networks and machine learning therefore receive considerable attention from the security research community. Adversarial examples were first demonstrated in image classification: in an evasion attack, slightly transformed images are misclassified by a machine learning classifier even when the changes are too small for a human to notice. Such an attack can cause a self-driving car to take an unwanted action when only a slight change is made to a road sign. Countermeasures against these attacks have been proposed, and more advanced attacks have in turn been developed to defeat them.

In this dissertation, we study adversarial example attacks tailored to the recognition purpose in settings with multiple deep neural networks. An adversarial example can be useful, for instance, when deceiving an enemy classifier on the battlefield; in such a scenario, a friendly classifier must not be deceived. We therefore propose the friend-safe adversarial example, which a friendly machine can still classify correctly. To produce such examples, a transformation is carried out that minimizes incorrect classification by the friend and correct classification by the adversary. We suggest two configurations for the scheme: targeted and untargeted class attacks. Experiments on the MNIST and CIFAR10 datasets show a 100% attack success rate and 100% friend accuracy with only a small distortion: 2.18 and 1.54 for the two MNIST configurations, and 49.02 and 27.61 for the two CIFAR10 configurations.

In addition, this research expands to selective attacks in the speech domain, attacks on multiple models, random untargeted attacks, attacks on specific areas, selective untargeted attacks, and defenses against adversarial examples. It also extends to CAPTCHA systems, face recognition systems, backdoor attacks, and poisoning attacks.
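The friend-safe transformation described in the abstract can be read as a joint optimization over three terms: the distortion added to the input, a loss that keeps the friendly classifier correct, and a loss that fools the enemy classifier. The sketch below illustrates that idea in PyTorch under stated assumptions; the friend and enemy models, the balancing constant c, and the optimizer settings are hypothetical and need not match the dissertation's exact formulation.

    # Hypothetical sketch of a friend-safe adversarial example, assuming a
    # Carlini-Wagner-style optimization; not the dissertation's exact method.
    import torch
    import torch.nn.functional as F

    def friend_safe_example(x, y, friend, enemy, target=None,
                            c=1.0, steps=500, lr=0.01):
        """Perturb x so that `friend` still predicts y while `enemy` is fooled.

        target=None gives the untargeted configuration (the enemy may end up
        on any wrong class); an integer class index gives the targeted
        configuration. x is a batch of inputs in [0, 1]; y holds the true
        class indices.
        """
        if target is not None and not torch.is_tensor(target):
            target = torch.full((x.size(0),), target,
                                dtype=torch.long, device=x.device)
        delta = torch.zeros_like(x, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            x_adv = torch.clamp(x + delta, 0.0, 1.0)
            # Keep the friendly classifier correct on the perturbed input.
            friend_loss = F.cross_entropy(friend(x_adv), y)
            enemy_logits = enemy(x_adv)
            if target is None:
                # Untargeted: push the enemy away from the true class.
                enemy_loss = -F.cross_entropy(enemy_logits, y)
            else:
                # Targeted: pull the enemy toward the chosen target class.
                enemy_loss = F.cross_entropy(enemy_logits, target)
            # L2 distortion keeps the perturbation inconspicuous.
            distortion = (delta ** 2).sum()
            loss = distortion + c * (friend_loss + enemy_loss)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return torch.clamp(x + delta, 0.0, 1.0).detach()

For the untargeted configuration one would call friend_safe_example(x, y, friend, enemy) and verify that friend's prediction on the result still matches y while enemy's does not; passing target=t yields the targeted configuration.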
Advisors
Yoon, Hyunsoo
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2020
Identifier
325007
Language
eng
Description

Doctoral dissertation - Korea Advanced Institute of Science and Technology (KAIST): School of Computing, 2020.2, [xii, 153 p.]

Keywords

Deep Neural Network (DNN); Evasion Attack; Adversarial Example; Machine Learning; Speech Recognition System

URI
http://hdl.handle.net/10203/283506
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=901603&flag=dissertation
Appears in Collection
CS-Theses_Ph.D. (Doctoral Theses)
Files in This Item
There are no files associated with this item.
