Hi, thanks for the great work!
Do you plan to add support for defining a mask for adversarial attacks (PGD, FGSM) in the future?
I'm thinking of a mask that specifies which parts of the input the adversarial perturbation should be applied to.
This would make it possible to perturb only certain regions of the input while leaving the rest unchanged, so adversarial examples could be generated much more flexibly.
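To illustrate what I mean, here is a minimal sketch in plain PyTorch (not this library's API; the function name, signature, and the assumption that `mask` is a 0/1 tensor broadcastable to the input shape are just for illustration):

```python
import torch
import torch.nn.functional as F

def masked_pgd(model, x, y, eps, alpha, steps, mask):
    """PGD sketch where the perturbation is applied only where mask == 1.

    mask: tensor of 0s and 1s with the same (or broadcastable) shape as x.
    Positions with mask == 0 are left exactly as in the original input.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Take a signed gradient step, restricted to the masked region.
        x_adv = x_adv.detach() + alpha * grad.sign() * mask
        # Project back into the eps-ball around x, again only inside the mask.
        x_adv = x.clone().detach() + (x_adv - x).clamp(-eps, eps) * mask
        # Keep the result in the valid input range.
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```

With `steps=1` and `alpha=eps` this reduces to a masked FGSM step, so the same mechanism would cover both attacks.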
Thanks a lot!