Hidden Conditional Adversarial Attacks

Deep neural networks are vulnerable to maliciously crafted inputs called adversarial examples. Research on unprecedented adversarial attacks is significant because it can help strengthen the reliability of neural networks by warning of potential threats against them. However, because existing adversarial attacks disturb models unconditionally, the resulting adversarial examples are easier to detect through statistical observation or human inspection. To tackle this limitation, we propose hidden conditional adversarial attacks, whose resulting adversarial examples disturb models only if the input images satisfy attacker pre-defined conditions. These hidden conditional adversarial examples offer greater stealthiness and finer control over when their attack behavior is activated. Our experimental results on the CIFAR-10 and ImageNet datasets show their effectiveness and raise a serious concern about the vulnerability of CNNs against these novel attacks.
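
The sketch below illustrates the general idea described in the abstract, not the authors' exact method: a PGD-style perturbation is optimized so that the model is misled only on inputs that satisfy a trigger condition, while predictions on all other inputs are preserved. The brightness-threshold condition, the single shared perturbation, and all hyperparameters are illustrative assumptions.

# Minimal sketch of a "hidden conditional" adversarial perturbation (assumed
# PGD-style formulation; not the paper's actual algorithm).
import torch
import torch.nn.functional as F

def condition(x, threshold=0.5):
    # Hypothetical attacker-defined trigger: mean image brightness above threshold.
    return x.flatten(1).mean(dim=1) > threshold

def hidden_conditional_attack(model, images, labels, eps=8/255, alpha=2/255, steps=40):
    # One shared perturbation applied to every input in the batch.
    delta = torch.zeros_like(images[:1], requires_grad=True)
    for _ in range(steps):
        logits = model(torch.clamp(images + delta, 0, 1))
        trig = condition(images)  # which inputs satisfy the attacker's condition
        # Attack term: increase loss (cause misclassification) on triggered inputs.
        attack_loss = -F.cross_entropy(logits[trig], labels[trig]) if trig.any() else 0.0
        # Stealth term: keep non-triggered inputs correctly classified.
        stealth_loss = F.cross_entropy(logits[~trig], labels[~trig]) if (~trig).any() else 0.0
        loss = attack_loss + stealth_loss
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend the combined objective
            delta.clamp_(-eps, eps)             # keep the perturbation imperceptible
        delta.grad.zero_()
    return delta.detach()

Minimizing the combined objective drives misclassification only when the condition fires, which is what makes the resulting examples harder to flag by unconditional statistical screening.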
Publisher
IEEE
Issue Date
2022-10
Language
English
Citation
IEEE International Conference on Image Processing, ICIP 2022, pp. 1306-1310
ISSN
1522-4880
DOI
10.1109/ICIP46576.2022.9898075
URI
http://hdl.handle.net/10203/300312
Appears in Collection
EE-Conference Papers(학술회의논문)