%0 Conference Proceedings
%T Pixel Based Adversarial Attacks on Convolutional Neural Network Models
%+ Sri Sivasubramaniya Nadar College of Engineering (SSN College of Engineering)
%+ Department of Computer Science and Engineering
%A Srinivasan, Kavitha
%A Jello Raveendran, Priyadarshini
%A Suresh, Varun
%A Anna Sundaram, Nithya, Rathna
%Z Part 1: Machine Learning (ML), Deep Learning (DL), Internet of Things (IoT)
%< with peer review
%( IFIP Advances in Information and Communication Technology
%B 4th International Conference on Computational Intelligence in Data Science (ICCIDS)
%C Chennai, India
%Y Vallidevi Krishnamurthy
%Y Suresh Jaganathan
%Y Kanchana Rajaram
%Y Saraswathi Shunmuganathan
%I Springer International Publishing
%3 Computational Intelligence in Data Science
%V AICT-611
%P 141-155
%8 2021-03-18
%D 2021
%R 10.1007/978-3-030-92600-7_14
%K Deep Neural Networks
%K Adversarial attacks
%K Convolutional Neural Network Models
%K Gradient weighted Class Activation Mapping
%K Edge detection
%K Noise addition
%K Saliency maps
%Z Computer Science [cs]
%Z Conference papers
%X Deep Neural Networks (DNNs) have found real-time applications, for example, facial recognition for security in ATMs and in self-driving cars. A major security threat to DNNs is adversarial attacks. An adversarial sample is an image that has been altered in a way that is imperceptible to the human eye but causes it to be misclassified by a Convolutional Neural Network (CNN). The objective of this research work is to devise pixel-based algorithms for adversarial attacks on images. To validate the algorithms, untargeted attacks are performed on the MNIST and CIFAR-10 datasets using techniques such as edge detection, Gradient-weighted Class Activation Mapping (Grad-CAM), and noise addition, whereas a targeted attack is performed on the MNIST dataset using saliency maps. The adversarial images thus generated are then passed to a CNN model and the misclassification results are analyzed. From the analysis, it is inferred that it is easier to fool CNNs with untargeted attacks than with targeted attacks. Also, grayscale images (MNIST) are preferable for generating robust adversarial examples compared to color images (CIFAR-10).
%G English
%Z TC 12
%2 https://inria.hal.science/hal-03772929/document
%2 https://inria.hal.science/hal-03772929/file/512058_1_En_14_Chapter.pdf
%L hal-03772929
%U https://inria.hal.science/hal-03772929
%~ IFIP-LNCS
%~ IFIP
%~ IFIP-AICT
%~ IFIP-TC
%~ IFIP-TC12
%~ IFIP-ICCIDS
%~ IFIP-AICT-611