> An Adversarial Attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to human eyes.
>
> — Adversarial machine learning, Wikipedia
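The idea can be illustrated with a minimal, self-contained sketch (independent of the toolkits listed below): an FGSM-style perturbation on a toy linear classifier, where a small step against the gradient of the class score flips the predicted class. The model, weights, and epsilon here are all hypothetical, chosen only to make the effect visible.

```python
import numpy as np

# Toy linear classifier: predicts class 1 if w @ x + b > 0.
w = np.array([1.0, -2.0])
b = 0.0

def predict(x):
    return int(w @ x + b > 0)

# A clean input, classified as class 1 (score = 0.5 - 0.2 = 0.3 > 0).
x = np.array([0.5, 0.1])

# FGSM-style perturbation: for a linear model, the gradient of the
# class-1 score with respect to x is just w, so we step eps against
# its sign to push the score down.
eps = 0.2
x_adv = x - eps * np.sign(w)

# The perturbation is bounded by eps per coordinate, yet it flips
# the prediction: score becomes 0.3 - 0.6 = -0.3 < 0, i.e. class 0.
print(predict(x), predict(x_adv))
```

The same principle — a bounded input change that crosses the decision boundary — underlies text attacks, except that perturbations there are discrete edits (word substitutions, insertions, character swaps) rather than continuous noise.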
| Title | Description, Information |
|---|---|
| TextAttack 🐙 | TextAttack is a Python framework for adversarial attacks, data augmentation, and model training in NLP. |
| TextFlint | A unified multilingual robustness evaluation toolkit for natural language processing. TextFlint unifies text transformation, sub-population, adversarial attack, and their combinations to provide comprehensive robustness analysis. So far, TextFlint supports 13 NLP tasks. |