Semi-Implicit Hybrid Gradient Methods with Application to Adversarial Robustness

Adversarial examples, crafted by adding imperceptible perturbations to natural inputs, can easily fool deep neural networks (DNNs). One of the most successful methods for training adversarially robust DNNs is solving a nonconvex-nonconcave minimax problem with an adversarial training (AT) algorithm. However, among the many AT algorithms, only Dynamic AT (DAT) and You Only Propagate Once (YOPO) are guaranteed to converge to a stationary point, with rate O(1/K^{1/2}). In this work, we generalize the stochastic primal-dual hybrid gradient algorithm to develop semi-implicit hybrid gradient methods (SI-HGs) for finding stationary points of nonconvex-nonconcave minimax problems. SI-HGs have the convergence rate O(1/K), which improves upon the rate O(1/K^{1/2}) of DAT and YOPO. We devise a practical variant of SI-HGs and show that it outperforms other AT algorithms in terms of convergence speed and robustness.
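For context, the minimax problem the abstract refers to has the form min_theta max_{||delta|| <= eps} E[ loss(f_theta(x + delta), y) ]. The sketch below shows plain PGD-based adversarial training in PyTorch (projected gradient ascent for the inner maximization, an SGD step for the outer minimization); it is not the SI-HG method proposed in the paper, and the function names, step sizes, and perturbation radius are illustrative assumptions only.

    import torch
    import torch.nn.functional as F

    def pgd_inner_max(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Inner maximization: projected gradient ascent on the loss over
        # an l_inf ball of radius eps around the clean input x.
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            # Ascent step followed by projection back onto the l_inf ball.
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
            delta = delta.detach().requires_grad_(True)
        return delta.detach()

    def adversarial_training_step(model, optimizer, x, y, eps=8/255):
        # Outer minimization: one optimizer step on the loss evaluated
        # at the adversarially perturbed input.
        delta = pgd_inner_max(model, x, y, eps=eps)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        optimizer.step()
        return loss.item()

Algorithms such as DAT, YOPO, and the SI-HGs studied in the paper replace this alternating inner/outer loop with updates that come with convergence guarantees for the nonconvex-nonconcave setting.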
Publisher
JMLR-JOURNAL MACHINE LEARNING RESEARCH
Issue Date
2022-03
Language
English
Citation
International Conference on Artificial Intelligence and Statistics
ISSN
2640-3498
URI
http://hdl.handle.net/10203/298263
Appears in Collection
RIMS Conference Papers
Files in This Item
There are no files associated with this item.
