%0 Conference Proceedings
%T GanDef: A GAN Based Adversarial Training Defense for Neural Network Classifier
%+ New Jersey Institute of Technology [Newark] (NJIT)
%+ Qatar Computing Research Institute [Doha, Qatar] (QCRI)
%A Liu, Guanxiong
%A Khalil, Issa
%A Khreishah, Abdallah
%Z Part 1: Intrusion Detection
%< with peer review
%( IFIP Advances in Information and Communication Technology
%B 34th IFIP International Conference on ICT Systems Security and Privacy Protection (SEC)
%C Lisbon, Portugal
%Y Gurpreet Dhillon
%Y Fredrik Karlsson
%Y Karin Hedström
%Y André Zúquete
%I Springer International Publishing
%3 ICT Systems Security and Privacy Protection
%V AICT-562
%P 19-32
%8 2019-06-25
%D 2019
%R 10.1007/978-3-030-22312-0_2
%K Neural network classifier
%K Generative Adversarial Net
%K Adversarial training defense
%Z Computer Science [cs]
%Z Conference papers
%X Machine learning models, especially neural network (NN) classifiers, are widely used in many applications, including natural language processing, computer vision, and cybersecurity. They provide high accuracy under the assumption of attack-free scenarios. However, this assumption has been defied by the introduction of adversarial examples: carefully perturbed input samples that are usually misclassified. Many researchers have tried to develop a defense against adversarial examples, but we are still far from achieving that goal. In this paper, we design a Generative Adversarial Net (GAN) based adversarial training defense, dubbed GanDef, which uses a competition game to regulate feature selection during training. We analytically show that GanDef can train a classifier so that it defends against adversarial examples. Through extensive evaluation on different white-box adversarial examples, the classifier trained by GanDef shows the same level of test accuracy as those trained by state-of-the-art adversarial training defenses. More importantly, GanDef-Comb, a variant of GanDef, can use the discriminator to achieve a dynamic trade-off between correctly classifying original and adversarial examples. As a result, it achieves the highest overall test accuracy when the ratio of adversarial examples exceeds 41.7%.
%G English
%Z TC 11
%2 https://inria.hal.science/hal-03744311/document
%2 https://inria.hal.science/hal-03744311/file/485650_1_En_2_Chapter.pdf
%L hal-03744311
%U https://inria.hal.science/hal-03744311
%~ IFIP
%~ IFIP-AICT
%~ IFIP-TC
%~ IFIP-TC11
%~ IFIP-SEC
%~ IFIP-AICT-562