DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Hakmin | ko |
dc.contributor.author | Ro, Yong Man | ko |
dc.date.accessioned | 2023-06-27T05:00:20Z | - |
dc.date.available | 2023-06-27T05:00:20Z | - |
dc.date.created | 2023-06-26 | - |
dc.date.issued | 2023-08 | - |
dc.identifier.citation | IMAGE AND VISION COMPUTING, v.136 | - |
dc.identifier.issn | 0262-8856 | - |
dc.identifier.uri | http://hdl.handle.net/10203/310051 | - |
dc.description.abstract | Adversarial training (AT), a robust training method for defending against adversarial examples, usually degrades model performance on clean examples due to the feature distribution discrepancy between clean and adversarial examples. In this paper, we propose a novel Adversarial Anchor-guided Feature Refinement (AAFR) defense method aimed at reducing this discrepancy and delivering reliable performance on both clean and adversarial examples. We devise an adversarial anchor that detects whether a feature comes from a clean or an adversarial example, and then use the anchor to refine the feature and reduce the discrepancy. As a result, the proposed method achieves substantial adversarial robustness while preserving performance on clean examples. The effectiveness of the proposed method is verified with comprehensive experiments on the CIFAR-10, CIFAR-100, and Tiny ImageNet datasets. | - |
dc.language | English | - |
dc.publisher | ELSEVIER | - |
dc.title | Adversarial anchor-guided feature refinement for adversarial defense | - |
dc.type | Article | - |
dc.identifier.wosid | 001025759700001 | - |
dc.identifier.scopusid | 2-s2.0-85162097744 | - |
dc.type.rims | ART | - |
dc.citation.volume | 136 | - |
dc.citation.publicationname | IMAGE AND VISION COMPUTING | - |
dc.identifier.doi | 10.1016/j.imavis.2023.104722 | - |
dc.contributor.localauthor | Ro, Yong Man | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Adversarial anchor | - |
dc.subject.keywordAuthor | Covariate shift | - |
dc.subject.keywordAuthor | Feature refinement | - |
dc.subject.keywordAuthor | Adversarial example | - |
dc.subject.keywordAuthor | Adversarial robustness | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
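The abstract above describes a two-step idea: detect whether a feature comes from a clean or an adversarial example via an "adversarial anchor", then refine the feature to shrink the clean/adversarial distribution gap. The toy numpy sketch below illustrates that idea only; it is not the paper's AAFR method. The linear mean-difference anchor, the 2-D Gaussian features, and all function names (`is_adversarial`, `refine`) are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D feature distributions (assumption: the paper operates on learned
# CNN features, not synthetic Gaussians).
clean = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
adv = clean + np.array([2.0, 0.0])  # adversarial features shifted off the clean distribution

# Hypothetical "anchor": unit direction separating the clean and
# adversarial feature means.
anchor = adv.mean(axis=0) - clean.mean(axis=0)
anchor /= np.linalg.norm(anchor)

def is_adversarial(feats, threshold=1.0):
    """Flag features whose projection onto the anchor exceeds a threshold."""
    return feats @ anchor > threshold

def refine(feats):
    """Remove the along-anchor shift, pulling features back toward clean."""
    shift = (feats - clean.mean(axis=0)) @ anchor
    return feats - np.outer(shift, anchor)

refined = refine(adv)
gap_before = abs((adv.mean(axis=0) - clean.mean(axis=0)) @ anchor)
gap_after = abs((refined.mean(axis=0) - clean.mean(axis=0)) @ anchor)
# gap_after is near zero: the mean discrepancy along the anchor is removed.
```

In this simplified picture, detection is a threshold on the anchor projection and refinement subtracts that projection; the actual method learns the anchor and refinement jointly during adversarial training.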