Rethinking training schedules for verifiably robust neural networks

Adversarial examples, imperceptibly crafted by adversarial attacks, can fool neural networks. Many defense methods have been proposed, but new and stronger attacks can break existing defenses. This possibility highlights the importance of certified defense methods, which train deep neural networks with verifiable robustness guarantees. Interval bound propagation (IBP)-based methods have been demonstrated to be the most effective for certified defense. However, we observe that these methods suffer from Low Epsilon Overfitting (LEO), a problem arising from their training schedule, which gradually increases the input perturbation bound ($\epsilon$). In this paper, we show that LEO can disturb the learning of a simple linear classifier at higher $\epsilon$ and provide experimental evidence of LEO. Based on these observations, we propose a new training strategy, BatchMix, which mixes various $\epsilon$ values within a mini-batch to alleviate LEO. Experimental results on the MNIST and CIFAR-10 datasets show that BatchMix improves the performance of IBP-based methods.
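The two ideas in the abstract can be sketched briefly. An IBP step propagates an input interval $[l, u]$ through an affine layer $y = Wx + b$ by splitting it into a center and a radius; BatchMix, as described here, replaces the single scheduled $\epsilon$ with a mix of perturbation bounds inside one mini-batch. The function names and the uniform sampling below are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def ibp_linear(lower, upper, W, b):
    """Standard IBP step: propagate the box [lower, upper] through y = W @ x + b.

    The output radius uses |W| so that each coordinate's worst case is covered.
    """
    center = (lower + upper) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius          # radius is non-negative, so this is sound
    return new_center - new_radius, new_center + new_radius

def batchmix_epsilons(batch_size, eps_max, rng):
    """Illustrative BatchMix-style sampling: draw a different perturbation bound
    in [0, eps_max] for each example, instead of one scheduled epsilon per batch."""
    return rng.uniform(0.0, eps_max, size=batch_size)
```

Soundness of the step is easy to check: any $x \in [l, u]$ yields $Wx + b$ inside the returned bounds, and composing such steps layer by layer gives the certified output interval that IBP training minimizes the worst-case loss over.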
Advisors
Kim, Changick
Description
Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2022
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2022.2, [iii, 37 p.]

URI
http://hdl.handle.net/10203/309870
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=997260&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
