Strengthening the transferability of adversarial examples using advanced looking ahead and self-cutmix

DC Field: Value

dc.contributor.author: Son, Sanghyeok
dc.date.accessioned: 2023-06-26T19:33:33Z
dc.date.available: 2023-06-26T19:33:33Z
dc.date.issued: 2022
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1000327&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/309817
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2022.2, [iv, 20 p.]
dc.description.abstract: Deep neural networks (DNNs) are vulnerable to adversarial examples, which are generated by adding malicious noise imperceptible to humans. Adversarial examples reliably fool models in the white-box setting, but attack performance degrades significantly in the black-box setting, a phenomenon known as the low-transferability problem. Various methods have been proposed to improve transferability, yet they remain ineffective against adversarial training and defense models. In this paper, we introduce two new methods, the Lookahead Iterative Fast Gradient Sign Method (LI-FGSM) and the Self-Cutmix Method (SCM), to address these issues. LI-FGSM updates adversarial perturbations with an accumulated gradient obtained by looking ahead: at each iteration, an existing gradient-based attack is run for N lookahead steps to explore the optimal update direction, which allows the optimization to escape suboptimal regions and stabilizes the update directions. SCM leverages a modified Cutmix that copies a patch from the original image and pastes it back at a random position of the same image, preserving the image's internal information; this produces more transferable adversarial examples while alleviating overfitting to the surrogate model. Both methods are easily incorporated into existing iterative gradient-based attacks. Extensive experiments on ImageNet show that our approach achieves state-of-the-art attack success rates not only against normally trained models but also against adversarial training and defense models. (Illustrative sketches of both methods follow the metadata listing below.)
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: Adversarial examples; Black-box; Transferability; Suboptimal region; Overfitting
dc.title: Strengthening the transferability of adversarial examples using advanced looking ahead and self-cutmix
dc.title.alternative: 발전된 선행 탐색과 자기 절취 및 혼합을 이용한 적대적 예제의 전이성 강화
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering
dc.contributor.alternativeauthor: 손상혁
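
The abstract above describes both methods concretely enough to admit rough sketches. Below is a minimal sketch of the SCM transformation, reconstructed solely from the abstract's description (copy a patch from the image and paste it back at a random position of the same image). The function name `self_cutmix`, the `patch_ratio` parameter, and the single-patch choice are illustrative assumptions, not details taken from the thesis, whose full text is not attached to this record.

```python
# Minimal sketch of a Self-Cutmix (SCM)-style transformation, based only on
# the abstract. Patch size and sampling scheme are assumptions.
import torch

def self_cutmix(x: torch.Tensor, patch_ratio: float = 0.3) -> torch.Tensor:
    """Copy a random patch of `x` (C, H, W) and paste it elsewhere in `x`."""
    _, h, w = x.shape
    ph, pw = int(h * patch_ratio), int(w * patch_ratio)
    # Source corner of the patch to copy.
    sy = torch.randint(0, h - ph + 1, (1,)).item()
    sx = torch.randint(0, w - pw + 1, (1,)).item()
    # Destination corner where the patch is pasted back into the same image.
    dy = torch.randint(0, h - ph + 1, (1,)).item()
    dx = torch.randint(0, w - pw + 1, (1,)).item()
    out = x.clone()
    out[:, dy:dy + ph, dx:dx + pw] = x[:, sy:sy + ph, sx:sx + pw]
    return out
```

Because the patch is taken from the same image, all pixel content stays in-distribution for that image, which matches the abstract's claim that the transformation preserves internal information.

Similarly, here is a minimal sketch of the lookahead update in LI-FGSM, again reconstructed only from the abstract: at each outer iteration an existing gradient-based attack is run for N lookahead steps from the current point, the inner gradients are accumulated, and the accumulated direction drives the actual update. Plain I-FGSM as the inner attack, the step sizes, the L-infinity projection, and all default values are assumptions.

```python
# Minimal sketch of an LI-FGSM-style lookahead update, based only on the
# abstract. Inner attack, step sizes, and projection rule are assumptions.
import torch
import torch.nn.functional as F

def li_fgsm(model, x, y, eps=8/255, alpha=2/255, iters=10, lookahead=5):
    x_adv = x.clone().detach()
    for _ in range(iters):
        g_acc = torch.zeros_like(x)      # accumulated lookahead gradient
        x_look = x_adv.clone().detach()  # temporary point used only to look ahead
        for _ in range(lookahead):
            x_look.requires_grad_(True)
            loss = F.cross_entropy(model(x_look), y)
            g = torch.autograd.grad(loss, x_look)[0]
            g_acc += g
            # Inner I-FGSM step: advance only the lookahead point.
            x_look = (x_look + alpha * g.sign()).detach()
        # Real update uses the accumulated direction, then an L-infinity
        # projection onto the eps-ball around x and the valid pixel range.
        x_adv = x_adv + alpha * g_acc.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1).detach()
    return x_adv
```

In a typical pipeline the two would compose, e.g. by applying `self_cutmix` to the input before each gradient evaluation, but how the thesis actually combines them is not specified in this record.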
Appears in Collection:
EE-Theses_Master (Master's theses)

Files in This Item:
There are no files associated with this item.
