
Mask for Adversarial Attacks #941

Closed
reheinrich opened this issue May 12, 2022 · 0 comments
@reheinrich
Hi, thanks for the great work!

Do you plan to support passing a mask to the adversarial attacks (e.g., PGD, FGSM) in the future?

I'm thinking of a mask that specifies which parts of the input the adversarial perturbation is applied to.

This would make it possible to perturb only certain regions of the input while leaving the rest unchanged, so adversarial examples could be generated much more flexibly.
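
For illustration, here is a minimal sketch of what I mean, assuming a PyTorch-style PGD loop. The `masked_pgd` helper and its signature are hypothetical, not part of the current API:

```python
import torch
import torch.nn.functional as F


def masked_pgd(model, x, y, mask, eps=8 / 255, alpha=2 / 255, steps=10):
    """Hypothetical PGD variant: perturb only where mask == 1.

    mask is a tensor broadcastable to x, with 1 = perturb, 0 = keep.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Restrict both the step and the accumulated perturbation
            # to the masked region, so unmasked pixels never change.
            x_adv = x_adv + alpha * grad.sign() * mask
            delta = torch.clamp(x_adv - x, -eps, eps) * mask
            x_adv = torch.clamp(x + delta, 0, 1)
    return x_adv.detach()
```

With `mask` set to all ones this reduces to standard PGD, while a binary patch mask would restrict the perturbation to just that patch.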

Thanks a lot!
