This repository is a work in progress, currently maintained as part of a course at the Carl von Ossietzky University Oldenburg.
Nowadays, neural networks are also used in safety-critical areas. As a result, there is a need to formally prove certain behaviours of a network, especially under malicious inputs. One such behaviour is the so-called robustness of the network: small perturbations of the input should not change the output of the neural network.
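A common way to make this precise (a standard formulation from the verification literature, not taken verbatim from the tutorials here) is *local robustness*: a classifier $f$ is robust at an input $x$ within a radius $\varepsilon$ if

$$\forall x' \colon \ \lVert x' - x \rVert_\infty \le \varepsilon \implies \arg\max f(x') = \arg\max f(x).$$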
The illustration shows an example from image recognition: on the left, the neural network classifies the image as a panda with a confidence of 57.7%. After a small amount of noise is added, the same network classifies the image as a gibbon with a confidence of 99.3%. In this example the misclassification is probably not safety-critical, but in other use cases (for example in autonomous driving) such a wrong decision could have serious consequences.
If you are new to SMT-based verification of neural networks, we recommend starting with the Z3 tutorial, as it is the most basic one. If you are also interested in the theory behind it, we recommend this book on neural network verification.
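To give a first impression of what such an encoding looks like, here is a minimal sketch using Z3's Python API. The one-neuron "network", its weights, the concrete input, and the decision threshold are made-up assumptions for illustration and are not part of the tutorials:

```python
# Minimal sketch: local robustness of a toy one-neuron network as an SMT
# query in Z3. The network y = ReLU(2*x - 1) and the threshold 0 are
# illustrative assumptions, not a real model from the tutorials.
from z3 import Real, If, Solver, And, sat

x   = Real("x")    # original input
x_p = Real("x_p")  # perturbed input
eps = 0.1          # perturbation budget

def net(v):
    pre = 2 * v - 1
    return If(pre > 0, pre, 0)  # ReLU(2*v - 1)

s = Solver()
s.add(x == 0.6)                              # fix a concrete input
s.add(And(x_p >= x - eps, x_p <= x + eps))   # x_p inside the eps-ball
# Search for a counterexample: the original input is classified
# positive, but some perturbed input is not.
s.add(net(x) > 0, net(x_p) <= 0)

if s.check() == sat:
    print("Not robust; counterexample:", s.model())
else:
    print("Robust: no perturbation within eps changes the decision")
```

If the solver reports `sat`, the model gives a concrete adversarial perturbation; an `unsat` result proves that no such perturbation exists within the budget.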
Tool | Colab Link | Tool Codebase | Progress
---|---|---|---
Z3 | | https://github.com/Z3Prover/z3 | ✔️ DONE
Marabou | | https://github.com/NeuralNetworkVerification/Marabou | ✔️ DONE
Planet | | https://github.com/progirep/planet | ✔️ DONE
Tool | Colab Link | Tool Codebase | Progress
---|---|---|---
ERAN | | https://github.com/eth-sri/eran | ✔️ DONE
Tool | Colab Link | Tool Codebase | Progress
---|---|---|---
α,β-CROWN | | https://github.com/huanzhang12/alpha-beta-CROWN | ✔️ DONE