Denoise autoencoder and tensor decomposition for the robustness against adversarial attacks

In this paper, we focus on defense models against adversarial attacks on deep learning models for image classification and semantic segmentation tasks. Deep learning is now used in many fields and shows excellent performance on computer vision tasks such as image classification and semantic segmentation. However, deep learning models have been found to be highly vulnerable to small, deliberately crafted perturbations known as adversarial attacks. In this paper, we propose preprocessing methods that neutralize the effect of adversarial attacks: a tensor-decomposition-based preprocessing step for image classification, and a denoising autoencoder for the semantic segmentation model. The target model can be defended without any modification, and our results show that relatively simple methods can defend against adversarial attacks.
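The record does not specify which tensor decomposition the thesis uses, but the general idea of such preprocessing can be sketched with a per-channel low-rank (SVD-based) reconstruction: projecting the input onto its dominant components suppresses the high-frequency adversarial perturbation before the image reaches the classifier. This is a minimal illustrative sketch, not the thesis's actual method; the function name and rank parameter are hypothetical.

```python
import numpy as np

def low_rank_denoise(image, rank=20):
    """Project each channel of an H x W x C image (values in [0, 1]) onto
    its top-`rank` singular components, discarding small perturbations."""
    out = np.empty_like(image, dtype=np.float64)
    for c in range(image.shape[2]):
        u, s, vt = np.linalg.svd(image[:, :, c].astype(np.float64),
                                 full_matrices=False)
        k = min(rank, len(s))
        # Reconstruct the channel from the k largest singular components.
        out[:, :, c] = (u[:, :k] * s[:k]) @ vt[:k, :]
    return np.clip(out, 0.0, 1.0)

# Toy example: a smooth (low-rank) image plus a small additive perturbation,
# standing in for an adversarially attacked input.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 32)[:, None, None], (1, 32, 3))
perturbed = np.clip(clean + rng.normal(0.0, 0.05, clean.shape), 0.0, 1.0)
restored = low_rank_denoise(perturbed, rank=2)
```

Because the clean signal here is essentially rank one, a rank-2 reconstruction keeps the image content while removing most of the added perturbation, which is the intuition behind using decomposition as an attack-agnostic preprocessing defense.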
Advisors
Kim, Daeyoung (김대영)
Description
Korea Advanced Institute of Science and Technology: School of Computing
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2020
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology: School of Computing, 2020.8, [iv, 31 p.]

Keywords

Computer Vision; Adversarial attack

URI
http://hdl.handle.net/10203/285002
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=925163&flag=dissertation
Appears in Collection
CS-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
