Consistency Regularization for Adversarial Robustness

Cited 7 times in Web of Science | Cited 0 times in Scopus
  • Hits: 441
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Tack, Jihoon | ko
dc.contributor.author | Yu, Sihyun | ko
dc.contributor.author | Jeong, Jongheon | ko
dc.contributor.author | Kim, Minseon | ko
dc.contributor.author | Hwang, Sung Ju | ko
dc.contributor.author | Shin, Jinwoo | ko
dc.date.accessioned | 2022-12-05T08:03:38Z | -
dc.date.available | 2022-12-05T08:03:38Z | -
dc.date.created | 2022-12-05 | -
dc.date.issued | 2022-02-24 | -
dc.identifier.citation | The Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22), pp. 8414-8422 | -
dc.identifier.issn | 2159-5399 | -
dc.identifier.uri | http://hdl.handle.net/10203/301708 | -
dc.description.abstract | Adversarial training (AT) is currently one of the most successful methods for obtaining adversarial robustness in deep neural networks. However, the phenomenon of robust overfitting, i.e., robustness that begins to decrease significantly during AT, has been problematic: it not only forces practitioners to rely on a bag of tricks for successful training, e.g., early stopping, but also incurs a significant generalization gap in robustness. In this paper, we propose an effective regularization technique that prevents robust overfitting by optimizing an auxiliary ‘consistency’ regularization loss during AT. Specifically, we find that data augmentation is a quite effective tool for mitigating overfitting in AT, and we develop a regularization that forces the predictive distributions obtained after attacking two different augmentations of the same instance to be similar to each other. Our experimental results demonstrate that this simple regularization technique brings significant improvements in the test robust accuracy of a wide range of AT methods. More remarkably, we also show that our method can significantly help the model generalize its robustness to unseen adversaries, e.g., other types of perturbations or larger perturbations than those used during training. Code is available at https://github.com/alinlab/consistency-adversarial. | -
dc.language | English | -
dc.publisher | Association for the Advancement of Artificial Intelligence | -
dc.title | Consistency Regularization for Adversarial Robustness | -
dc.type | Conference | -
dc.identifier.wosid | 000893639101048 | -
dc.identifier.scopusid | 2-s2.0-85127171613 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 8414 | -
dc.citation.endingpage | 8422 | -
dc.citation.publicationname | The Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22) | -
dc.identifier.conferencecountry | CN | -
dc.identifier.conferencelocation | Virtual | -
dc.contributor.localauthor | Hwang, Sung Ju | -
dc.contributor.localauthor | Shin, Jinwoo | -
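
The abstract above describes the method at a high level: during adversarial training, attack two augmented views of each image and penalize disagreement between the resulting post-attack predictive distributions. Below is a minimal PyTorch sketch of that idea, not the authors' implementation (see the linked repository for the actual code); the PGD attack settings, the temperature, the weight lambda_cons, and the Jensen-Shannon-style divergence used for the consistency term are all illustrative assumptions and may differ from the loss used in the paper.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
        # Standard L-infinity PGD attack (illustrative choice of inner attack).
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        return x_adv.detach()

    def consistency_at_loss(model, x1, x2, y, lambda_cons=1.0, temperature=0.5):
        # x1, x2: two independent random augmentations of the same image batch in [0, 1].
        x1_adv = pgd_attack(model, x1, y)
        x2_adv = pgd_attack(model, x2, y)
        logits1, logits2 = model(x1_adv), model(x2_adv)

        # Standard adversarial training (cross-entropy) loss, averaged over both views.
        at_loss = 0.5 * (F.cross_entropy(logits1, y) + F.cross_entropy(logits2, y))

        # Consistency term: pull the temperature-scaled predictive distributions of the
        # two attacked views together via a Jensen-Shannon-style divergence to their mixture.
        p1 = F.softmax(logits1 / temperature, dim=1)
        p2 = F.softmax(logits2 / temperature, dim=1)
        m = 0.5 * (p1 + p2)
        consistency = 0.5 * (F.kl_div(m.log(), p1, reduction="batchmean")
                             + F.kl_div(m.log(), p2, reduction="batchmean"))
        return at_loss + lambda_cons * consistency

In a training loop, x1 and x2 would be two random augmentations (e.g., random crop and horizontal flip) of the same batch, and consistency_at_loss would take the place of the usual adversarial training objective.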
Appears in Collection
AI-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.