One-Pixel Adversarial Example that Is Safe for Friendly Deep Neural Networks. Kwon, Hyun; Kim, Yongchul; Yoon, Hyunsoo; Choi, Daeseon. 19th World Conference on Information Security Applications (WISA 2018), pp. 42-54, Korea Institute of Information Security & Cryptology, 2018-08-23
POSTER: Detecting Audio Adversarial Example through Audio Modification. Kwon, Hyun; Yoon, Hyunsoo; Park, Ki-Woong. The 26th ACM Conference on Computer and Communications Security (ACM CCS 2019), pp. 2521-2523, ACM Special Interest Group on Security, Audit and Control (SIGSAC), 2019-11-12
Priority Adversarial Example in Evasion Attack on Multiple Deep Neural Networks. Kwon, Hyun; Yoon, Hyunsoo; Choi, Daeseon. The 1st International Conference on Artificial Intelligence in Information and Communication (ICAIIC 2019), pp. 399-404, The Korean Institute of Communications and Information Sciences, 2019-02-13
Selective Poisoning Attack on Deep Neural Network to Induce Fine-Grained Recognition Error. Kwon, Hyun; Yoon, Hyunsoo; Park, Ki-Woong. IEEE International Conference on Artificial Intelligence and Knowledge Engineering (IEEE AIKE), pp. 136-139, IEEE Computer Society Press, 2019-06-04
TargetNet Backdoor: Attack on Deep Neural Network with Use of Different Triggers. Kwon, Hyun; Roh, Jungmin; Yoon, Hyunsoo; Park, Ki-Woong. 5th International Conference on Intelligent Information Technology (ICIIT 2020), pp. 140-145, Association for Computing Machinery, 2020-02