(A) study on reliable assessment of real-world adversarial robustness of AI models

DC Field: Value
dc.contributor.advisor: 김창익
dc.contributor.author: Byun, Junyoung
dc.contributor.author: 변준영
dc.date.accessioned: 2024-07-26T19:30:56Z
dc.date.available: 2024-07-26T19:30:56Z
dc.date.issued: 2023
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1047265&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/320965
dc.description: Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2023.8, [vii, 75 p.]
dc.description.abstract: Although remarkable AI models have been developed for various computer vision tasks, they still remain vulnerable to adversarial examples, which are subtly modified inputs that lead to incorrect predictions. This vulnerability raises concerns about the security, reliability, and robustness of AI models, especially in real-world applications, and it is therefore important to accurately assess their adversarial robustness in advance. In this regard, this dissertation proposes three novel techniques to improve the attack performance of adversarial examples against black-box target models within realistic adversarial attack scenarios: (1) the Object-based Diverse Input technique, which improves the transferability of adversarial examples by rendering images on randomly sampled 3D objects; (2) the Clean Feature Mixup technique, which further improves the transferability by introducing competition into the optimization process of adversarial examples; and (3) the Deformable Patch Projection technique, which facilitates physical-world adversarial attacks by modeling each adversarial patch as a deformable mesh and projecting it onto 3D objects using physics simulation. Comprehensive experimental results show that the proposed techniques provide a more reliable assessment of the adversarial robustness of AI models under realistic attack scenarios, which can contribute to the development of more secure and robust AI systems in the future. (An illustrative sketch of a diversified transfer-based attack of this kind follows the record below.)
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: Black-box adversarial attack; Real-world attack; AI vulnerabilities; Transfer-based attack; Transferability; Physics simulation; Object-based diverse input; Clean feature mixup; Deformable patch projection
dc.title: (A) study on reliable assessment of real-world adversarial robustness of AI models
dc.title.alternative: 인공지능 모델의 실세계 적대적 강인성에 대한 신뢰할 수 있는 평가에 관한 연구
dc.type: Thesis (Ph.D.)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology (KAIST), School of Electrical Engineering
dc.contributor.alternativeauthor: Kim, Changick
Appears in Collection
EE-Theses_Ph.D. (Doctoral dissertations)
Files in This Item
There are no files associated with this item.
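The abstract describes transfer-based black-box attacks whose inputs are diversified before each gradient step; in the dissertation's Object-based Diverse Input technique, this diversification renders the image on randomly sampled 3D objects. The sketch below is a minimal, hypothetical illustration of that general idea, not the author's implementation: it runs an iterative FGSM-style attack on a white-box surrogate and substitutes a simple random resize-and-pad transform where the actual technique would use 3D rendering. The surrogate choice (resnet18), the random_diversify helper, and the epsilon/step-size values are illustrative assumptions.

```python
# Hypothetical sketch of a transfer-based attack with input diversification.
# Not the dissertation's code; the 3D-object rendering of Object-based
# Diverse Input is replaced here by a simple random resize-and-pad.
import torch
import torch.nn.functional as F
import torchvision.models as models


def random_diversify(x: torch.Tensor, low: int = 200, high: int = 224) -> torch.Tensor:
    """Stand-in for 3D-object rendering: random resize, then pad back to `high`."""
    size = int(torch.randint(low, high + 1, (1,)).item())
    resized = F.interpolate(x, size=size, mode="bilinear", align_corners=False)
    pad_total = high - size
    left = int(torch.randint(0, pad_total + 1, (1,)).item())
    top = int(torch.randint(0, pad_total + 1, (1,)).item())
    return F.pad(resized, (left, pad_total - left, top, pad_total - top))


def transfer_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterative FGSM-style attack on a white-box surrogate; the result would
    then be evaluated against separate black-box target models."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Diversify the input before each gradient step to improve transferability.
        logits = model(random_diversify(x_adv))
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            # Keep the perturbation within the epsilon ball and valid pixel range.
            x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)
    return x_adv.detach()


if __name__ == "__main__":
    surrogate = models.resnet18(weights=None).eval()  # untrained stand-in surrogate
    images = torch.rand(2, 3, 224, 224)
    labels = torch.tensor([1, 2])
    adv = transfer_attack(surrogate, images, labels)
    print("max perturbation:", (adv - images).abs().max().item())
```

In a transfer setting, the resulting adversarial images would be fed to black-box target models to estimate how robust those models are under such realistic attack scenarios.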
