A one-pixel attack technique for adversarial machine learning, in which a minimal perturbation of individual pixels is strategically introduced to manipulate the predictions of deep learning models.

bibek36/One-Pixel-Attack-for-Fooling-Deep-Neural-Networks

Abstract

Recent studies have uncovered the susceptibility of Deep Neural Networks (DNNs) to manipulation through small perturbations applied to input vectors. This project analyzes a specific attack scenario in which only a single pixel can be modified. To address this, we propose a novel approach utilizing differential evolution (DE) to generate one-pixel adversarial perturbations. Our method operates as a black-box attack, requiring minimal adversarial information, and can deceive a wider range of network types due to the inherent characteristics of DE. Our results reveal that 67.97% of the natural images in the Kaggle CIFAR-10 test dataset can be perturbed to at least one target class by modifying just one pixel, with an average confidence of 74.03%. This attack sheds light on the vulnerability of current DNNs to low-dimensional attacks, presenting a distinctive perspective on adversarial machine learning within an extremely constrained scenario.
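The DE-based search can be illustrated with a minimal sketch (not the exact code used in this project). The snippet below uses SciPy's `differential_evolution` to optimize a five-element vector `(x, y, r, g, b)` encoding the position and color of the single perturbed pixel; `predict_fn` is an assumed black-box interface to the target classifier that returns class probabilities for a batch of images.

```python
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(image, true_label, predict_fn, maxiter=30, popsize=20):
    """Search for a one-pixel perturbation that lowers the model's
    confidence in the true class, using differential evolution.

    image      : HxWx3 array with pixel values in [0, 255]
    true_label : integer index of the correct class
    predict_fn : hypothetical black-box function mapping a batch of
                 images to class-probability vectors
    """
    h, w, _ = image.shape
    # Each DE candidate encodes (x, y, r, g, b) for one pixel.
    bounds = [(0, w - 1), (0, h - 1), (0, 255), (0, 255), (0, 255)]

    def apply_perturbation(z):
        x, y, r, g, b = z
        perturbed = image.copy()
        perturbed[int(y), int(x)] = (int(r), int(g), int(b))
        return perturbed

    def fitness(z):
        # DE minimizes this objective: the model's confidence
        # in the true class after the pixel is changed.
        probs = predict_fn(apply_perturbation(z)[None, ...])[0]
        return probs[true_label]

    result = differential_evolution(
        fitness, bounds, maxiter=maxiter, popsize=popsize, seed=0)
    adversarial = apply_perturbation(result.x)
    # Untargeted success: the perturbed image is no longer
    # classified as the true class.
    success = np.argmax(predict_fn(adversarial[None, ...])[0]) != true_label
    return adversarial, success
```

The success criterion here is untargeted (any misclassification counts); a targeted variant would instead minimize the negative probability of a chosen target class.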

Read the Project Report for a detailed explanation:

Project Report.pdf
