Online Evasion Attacks on Recurrent Models: The Power of Hallucinating the Future

Recurrent models are frequently used in online tasks such as autonomous driving, and a comprehensive study of their vulnerability is called for. Existing research is limited in generality: it either addresses application-specific vulnerabilities or makes implausible assumptions, such as knowledge of future inputs. In this paper, we present a general attack framework for online tasks that incorporates the constraints unique to the online setting, which distinguish it from offline tasks. The framework is versatile in that it covers time-varying adversarial objectives and various optimization constraints, allowing for a comprehensive study of robustness. Using the framework, we also present a novel white-box attack, called Predictive Attack, that 'hallucinates' the future. On average, the attack achieves 98 percent of the performance of the ideal but infeasible clairvoyant attack. We validate the effectiveness of the proposed framework and attacks through various experiments.
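The core idea described in the abstract can be illustrated with a minimal sketch: in an online setting the attacker cannot observe future inputs, so at each step it "hallucinates" them with a predictor, then optimizes the current perturbation against the recurrent model over the hallucinated horizon. Everything below is assumed for illustration: the toy RNN, the naive persistence predictor, the finite-difference gradient, and the single FGSM-style step are stand-ins for the paper's actual models and optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy recurrent model with fixed random weights (an assumption for illustration).
W = rng.normal(size=(8, 8)) * 0.3   # hidden-to-hidden
U = rng.normal(size=(8, 4)) * 0.3   # input-to-hidden
v = rng.normal(size=8)              # readout

def run(h, xs):
    """Roll the RNN over the input sequence xs from hidden state h; return final logit."""
    for x in xs:
        h = np.tanh(W @ h + U @ x)
    return v @ h

def hallucinate(x_t, horizon):
    """Crude 'predictor': assume the last observed input persists.
    (An assumption; the paper presumably uses a learned predictive model.)"""
    return [x_t.copy() for _ in range(horizon)]

def predictive_attack_step(h, x_t, eps=0.1, horizon=3, fd=1e-4):
    """One online attack step: perturb only the current input to push the logit
    over the current step plus a hallucinated future, within an L-inf ball."""
    future = hallucinate(x_t, horizon)
    base = run(h, [x_t] + future)
    grad = np.zeros_like(x_t)
    for i in range(x_t.size):  # finite-difference gradient (sketch only)
        bumped = x_t.copy()
        bumped[i] += fd
        grad[i] = (run(h, [bumped] + future) - base) / fd
    # FGSM-style step, clipped so the perturbation stays within the eps ball.
    return np.clip(x_t + eps * np.sign(grad), x_t - eps, x_t + eps)

# Usage: attack the input arriving at the current timestep.
h0 = np.zeros(8)
x = rng.normal(size=4)
x_adv = predictive_attack_step(h0, x)
```

The key online constraint the sketch respects is causality: only the current input is perturbed, and the objective looks ahead solely through hallucinated (predicted) inputs, never the true future that a clairvoyant attack would use.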
Publisher
IJCAI
Issue Date
2022-07-29
Language
English
Citation
31st International Joint Conference on Artificial Intelligence, IJCAI 2022, pp. 3121-3127
ISSN
1045-0823
URI
http://hdl.handle.net/10203/299494
Appears in Collection
CS-Conference Papers(학술회의논문)
Files in This Item
There are no files associated with this item.
