Feature Separation and Recalibration for Adversarial Robustness

Abstract
Deep neural networks are susceptible to adversarial attacks due to the accumulation of perturbations in the feature level, and numerous works have boosted model robustness by deactivating the non-robust feature activations that cause model mispredictions. However, we claim that these malicious activations still contain discriminative cues and that with recalibration, they can capture additional useful information for correct model predictions. To this end, we propose a novel, easy-to-plugin approach named Feature Separation and Recalibration (FSR) that recalibrates the malicious, non-robust activations for more robust feature maps through Separation and Recalibration. The Separation part disentangles the input feature map into the robust feature with activations that help the model make correct predictions and the non-robust feature with activations that are responsible for model mispredictions upon adversarial attack. The Recalibration part then adjusts the non-robust activations to restore the potentially useful cues for model predictions. Extensive experiments verify the superiority of FSR compared to traditional deactivation techniques and demonstrate that it improves the robustness of existing adversarial training methods by up to 8.57% with small computational overhead. Codes are available at https://github.com/wkim97/FSR.
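The sketch below illustrates how a separation-and-recalibration block of the kind described in the abstract could be wired up in PyTorch. The module names (FSRBlock, separation_mask, recalibration), the sigmoid soft mask, and the additive merge are illustrative assumptions, not the authors' released implementation; see the repository linked above for the actual code.

```python
import torch
import torch.nn as nn


class FSRBlock(nn.Module):
    """Illustrative feature separation and recalibration block (sketch).

    Separation: a learned per-activation score splits the feature map into
    a robust part and a non-robust part.
    Recalibration: the non-robust part is transformed so its cues can again
    contribute to a correct prediction, then merged back in.
    All names and shapes here are assumptions for illustration only.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Predicts a robustness score in (0, 1) for every activation.
        self.separation_mask = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Adjusts the non-robust activations before they are merged back.
        self.recalibration = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        mask = self.separation_mask(feat)      # per-activation robustness scores
        robust = feat * mask                   # activations kept as-is
        non_robust = feat * (1.0 - mask)       # activations that mislead the model
        recalibrated = self.recalibration(non_robust)
        # Merge the recalibrated cues back with the robust feature map.
        return robust + recalibrated


if __name__ == "__main__":
    block = FSRBlock(channels=64)
    x = torch.randn(2, 64, 32, 32)             # dummy intermediate feature map
    print(block(x).shape)                      # torch.Size([2, 64, 32, 32])
```

A soft mask is used here so that both branches remain differentiable end to end; the exact separation and recalibration operators in the paper may differ.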
Publisher
Computer Vision Foundation
Issue Date
2023-06-20
Language
English
Citation
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
URI
http://hdl.handle.net/10203/314618
Appears in Collection
CS-Conference Papers(학술회의논문)