Double Targeted Universal Adversarial Perturbations

Despite their impressive performance, deep neural networks (DNNs) are widely known to be vulnerable to adversarial attacks, which makes it challenging to deploy them in security-sensitive applications such as autonomous driving. Image-dependent perturbations can fool a network for one specific image, while universal adversarial perturbations can fool a network on samples from all classes without selection. We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative image-dependent perturbations and generic universal perturbations. This universal perturbation shifts samples from one targeted source class to a chosen sink class, while having only a limited adversarial effect on the other, non-targeted source classes in order to avoid raising suspicion. Because it targets the source and sink classes simultaneously, we term it a double targeted attack (DTA). This gives an attacker the freedom to perform precise attacks on a DNN model while raising little suspicion. We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack (Code: https://github.com/phibenz/double-targeted-uap.pytorch ). © 2021, Springer Nature Switzerland AG.
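The abstract describes an objective with two terms: a targeted term that pushes samples of the chosen source class toward the sink class, and a term that keeps all other classes on their original labels. The sketch below illustrates one plausible form of such a loss in plain NumPy; the function name `dta_loss` and the exact weighting are assumptions for illustration, not the paper's objective — the authors' actual PyTorch implementation is at the linked repository.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dta_loss(logits, labels, source_class, sink_class):
    """Illustrative double-targeted loss (assumed form, not the paper's exact
    objective). Samples from the targeted source class are pushed toward the
    sink class; all other samples are encouraged to keep their true labels."""
    probs = softmax(logits)
    is_src = labels == source_class
    # Targeted term: source-class samples should be classified as the sink class.
    src_loss = -np.log(probs[is_src, sink_class] + 1e-12).sum()
    # Non-targeted term: remaining samples should stay on their original labels.
    keep = ~is_src
    keep_loss = -np.log(probs[keep, labels[keep]] + 1e-12).sum()
    return (src_loss + keep_loss) / len(labels)
```

In a full attack, this loss would be minimized over a single perturbation added to every input, e.g. by projected gradient descent under an L-infinity norm bound, so that one universal pattern realizes the source-to-sink misclassification.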
Publisher
Springer Science and Business Media Deutschland GmbH
Issue Date
2021-12
Language
English
Citation
15th Asian Conference on Computer Vision (ACCV 2020), pp. 284-300
ISSN
0302-9743
DOI
10.1007/978-3-030-69538-5_18
URI
http://hdl.handle.net/10203/288759
Appears in Collection
EE - Conference Papers