Backdoor Attacks in Neural Networks – A Systematic Evaluation on Multiple Traffic Sign Datasets – Machine Learning and Knowledge Extraction
Conference paper, Year: 2019


Huma Rehman
  • Role: Author
  • PersonId: 1066987
Andreas Ekelhart
  • Role: Author
  • PersonId: 1066990
Rudolf Mayer
  • Role: Author
  • PersonId: 1066993

Abstract

Machine learning, and deep learning in particular, has seen tremendous advances and surpassed human-level performance on a number of tasks. Machine learning is increasingly integrated into many applications, thereby becoming part of everyday life and automating decisions based on predictions. In certain domains, such as medical diagnosis, security, autonomous driving, and financial trading, wrong predictions can have a significant impact on individuals and groups. While advances in prediction accuracy have been impressive, machine learning systems can still make rather unexpected mistakes on relatively easy examples, and the robustness of algorithms has become a reason for concern before deploying such systems in real-world applications. Recent research has shown that deep neural networks in particular are susceptible to adversarial attacks that can trigger such wrong predictions. For image analysis tasks, these attacks take the form of small perturbations that remain (almost) imperceptible to human vision. Such attacks can cause a neural network classifier to completely change its prediction about an image, with the model even reporting high confidence in the wrong prediction. Of particular interest for an attacker are so-called backdoor attacks, where a specific key is embedded into a data sample to trigger a pre-defined class prediction. In this paper, we systematically evaluate the effectiveness of poisoning (backdoor) attacks on a number of benchmark datasets from the domain of autonomous driving.
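The backdoor (poisoning) attack described in the abstract can be illustrated with a minimal sketch: a small, fixed trigger patch is stamped into a fraction of the training images, and those samples are relabeled to the attacker's target class. The function name, patch placement, patch size, and poisoning fraction below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def poison_samples(images, labels, target_label, patch_size=3, fraction=0.1, seed=0):
    """Stamp a white square trigger into a fraction of the training images
    and relabel those samples to the attacker's target class.

    images: (N, H, W, C) uint8 array; labels: (N,) int array.
    Returns poisoned copies plus the indices of the modified samples.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger in the bottom-right corner of each chosen image.
    images[idx, -patch_size:, -patch_size:, :] = 255
    # Relabel the poisoned samples so the model associates the trigger
    # with the target class during training.
    labels[idx] = target_label
    return images, labels, idx

# Toy usage: 20 blank 32x32 RGB "traffic sign" images across 5 classes.
imgs = np.zeros((20, 32, 32, 3), dtype=np.uint8)
lbls = np.arange(20) % 5
p_imgs, p_lbls, idx = poison_samples(imgs, lbls, target_label=0, fraction=0.25)
```

A model trained on such a poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger patch is present, which is what makes the attack hard to detect by accuracy alone.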
Main file
485369_1_En_18_Chapter.pdf (1.06 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02520034 , version 1 (26-03-2020)

License

Attribution (CC BY)

Cite

Huma Rehman, Andreas Ekelhart, Rudolf Mayer. Backdoor Attacks in Neural Networks – A Systematic Evaluation on Multiple Traffic Sign Datasets. 3rd International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE), Aug 2019, Canterbury, United Kingdom. pp.285-300, ⟨10.1007/978-3-030-29726-8_18⟩. ⟨hal-02520034⟩