We propose a benchmark suite for neural network verification for systems (NN4Sys) in this repository. The suite includes verification benchmarks for learned index, learned cardinality, learned Bloom filter, learned Internet congestion control, learned adaptive bitrate, and learned distributed system scheduler: six tasks that apply neural networks to traditional systems problems.
Steps 1 and 2 reproduce the generation of specifications. All results from these steps are already included in the repository, so you can skip them and go directly to the Verification section if you only want to run verification with this benchmark.
Pre-trained models are available, so you may skip this step if you wish to use them directly. However, if you prefer to train the models yourself, the training code is located in the /Models directory.
We provide scripts for training the following models:
- Learned Internet Congestion Control
- Learned Adaptive Bitrate
- Learned Distributed System Scheduler
You can find the training and testing instructions and pre-trained models for each model in the table below:
Model | Training and Testing Instructions | Pre-trained Model |
---|---|---|
Learned Internet Congestion Control | Instruction | Model |
Learned Adaptive Bitrate | Instruction | Model |
Learned Distributed System Scheduler | Instruction | Model |
Database Learned Index | - | ONNX Model |
Learned Bloom Filter | - | ONNX Model |
Learned Cardinalities | - | ONNX Model |
This step generates specifications. All results from this step are already included in the repository, so you can skip this step and use them directly.
First, create the conda environment:
conda env create -f environment.yml
Then, activate the environment:
conda activate myenv
If you have an issue building this environment, please run
conda create -n myenv python=3.9
conda activate myenv
Then, install whichever packages are reported as missing.
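A typical set of packages this benchmark relies on can then be installed with pip (this list is only a guess; install whatever the error messages actually report as missing):
pip install torch numpy onnx onnxruntime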
The next step is to generate the "instance pool" for each model, which contains thousands of candidate instances used as resources when the benchmark instances are generated.
To generate fixed input files for the models, run:
cd Models
python gen_upper.py --model {"pensieve", "decima", "lindex", "cardinality", "bloom_filter", "aurora", "all"}
cd ..
The table below maps each model to the name used for the --model parameter.
Model | Used Name in Parameter |
---|---|
Learned Internet Congestion Control | aurora |
Learned Adaptive Bitrate | pensieve |
Learned Distributed System Scheduler | decima |
Database Learned Index | lindex |
Learned Bloom Filter | bloom_filter |
Learned Cardinalities | cardinality |
All Models | all |
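For example, to generate the instance pool for the Learned Internet Congestion Control model only:
cd Models
python gen_upper.py --model aurora
cd ..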
Note: currently, we only provide the instance-pool generation script for the following models:
- Learned Internet Congestion Control
- Learned Adaptive Bitrate
- Learned Distributed System Scheduler
You can skip this step as these files have already been provided. Refer to this table for more information.
We use ONNX format models for verification. To create ONNX models from trained models, run:
cd Models
python export.py --model {"pensieve", "decima", "lindex", "cardinality", "bloom_filter", "aurora", "all"}
cd ..
You can skip this step as ONNX models are already provided. Refer to this table for more information.
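If you export models yourself, you can optionally sanity-check an ONNX file before verification. The snippet below is a minimal sketch; the file path is a placeholder, so point it at whichever ONNX file export.py produced for your model.

```python
# Minimal ONNX sanity check (the path is a placeholder; adjust it to your exported file).
import numpy as np
import onnx
import onnxruntime as ort

model_path = "Models/aurora.onnx"  # hypothetical location of an exported model

onnx.checker.check_model(onnx.load(model_path))  # validates the graph structure

sess = ort.InferenceSession(model_path)
inp = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # replace symbolic dims with 1
dummy = np.random.rand(*shape).astype(np.float32)
print("output shapes:", [o.shape for o in sess.run(None, {inp.name: dummy})])
```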
To create the specifications, run the following commands:
cd Benchmarks
python generate_properties.py --seed 2024
cd ..
The default random seed is 2024.
By default, this step generates 10 instances for each specification separately. If you want to generate more instances, you can set the --size and --model parameters; the model name can be chosen from the model and parameter table above, for example:
cd Benchmarks
python generate_properties.py --model pensieve --size 20
cd ..
When you see the message
Successfully generated required instances!
it means the instances have been generated successfully.
The generated files are located in Benchmarks/vnnlib and Benchmarks/marabou_txt.
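Each .vnnlib file is a plain-text specification in the standard VNN-LIB (SMT-LIB-style) format: input and output variable declarations, input bounds, and the property to check. The fragment below is an illustrative sketch only, not an actual generated instance; the variable names and bounds are placeholders.

```
; declare network inputs and outputs
(declare-const X_0 Real)
(declare-const Y_0 Real)
; input bounds for this instance
(assert (>= X_0 0.25))
(assert (<= X_0 0.75))
; property to check: is there an input within the bounds with Y_0 <= 0?
(assert (<= Y_0 0.0))
```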
You can skip this step, as the instances are already provided; refer to the table below.
Model | Instance Pool | ONNX Model | Instance VNNLIB | Instance TXT |
---|---|---|---|---|
Learned Internet Congestion Control | Instance Pool | ONNX Model | VNNLIB | TXT |
Learned Adaptive Bitrate | Instance Pool | ONNX Model | VNNLIB | TXT |
Learned Distributed System Scheduler | Instance Pool | ONNX Model | VNNLIB | TXT |
Database Learned Index | Instance Pool | ONNX Model | VNNLIB | TXT |
Learned Bloom Filter | Instance Pool | ONNX Model | VNNLIB | TXT |
Learned Cardinalities | Instance Pool | ONNX Model | VNNLIB | TXT |
Currently, we provide scripts for Alpha-Beta-CROWN and Marabou. We are actively developing this project, and more verifiers will be supported in the future.
Install Alpha-Beta-CROWN by following the instructions at https://github.com/Verified-Intelligence/alpha-beta-CROWN.
Then run:
cd Verification
python abcrown_run.py --path {path to abcrown.py} --model {pensieve, decima, lindex, cardinality, bloom_filter, aurora, all}
cd ..
Hint: the path to abcrown.py typically looks like /home/User/Verifier_Development/complete_verifier/abcrown.py.
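For example, to verify the Learned Adaptive Bitrate instances (adjust --path to your own abcrown.py location):
cd Verification
python abcrown_run.py --path /home/User/Verifier_Development/complete_verifier/abcrown.py --model pensieve
cd ..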
Once the process is complete, you should see the following messages:
[Done] Finished running abcrown verification for {model}
Install Marabou by following the instructions at https://github.com/NeuralNetworkVerification/Marabou/tree/master.
Then run:
cd Verification
python marabou_run.py --path {runMarabou.py path} --model {"pensieve", "decima", "lindex", "bloom_filter", "aurora", "all"}
cd ..
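For example (the Marabou path below is only an assumption; point --path at the runMarabou.py script in your Marabou checkout):
cd Verification
python marabou_run.py --path /home/User/Marabou/resources/runMarabou.py --model pensieve
cd ..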
Once the process is complete, you should see the following messages:
[Done] Finished running marabou verification for {model}
To visualize the results, run:
cd Verification/figures
python create_json.py
python draw.py
cd ../..
Once the process is complete, you should see the following messages:
[Done] Creation is finished. Result stored in eval_results.json
[Done] Successfully generated verification_runtime.pdf
[Done] Successfully generated verification_runtime_and_model_size.pdf
The generated PDF files will be located in the /Verification/figures/ directory.
The resulting figures plot the verification runtime, and the verification runtime alongside model size, for the benchmark models.
Ensure you have the required environment set up. Refer to Section 2.1 for detailed instructions. Run the following commands to install additional required packages for customizing benchmarks:
cd Customize
pip install -r requirements.txt
cd ..
Train your model using your preferred framework and methodology. Once trained, save the following files to the specified locations:
- Model Checkpoint: Save the .pth file to /Customize/model.pt
- Model Architecture: Save the model definition script (e.g., the .py file) to /Customize/model.py
These files will be used to load and evaluate your model.
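As a reference, here is a minimal sketch of what /Customize/model.py and its checkpoint might look like, assuming a PyTorch model with a single output (the class name, layer sizes, and input dimension are illustrative placeholders, not part of the benchmark):

```python
# model.py: placeholder single-output network (names and sizes are illustrative)
import torch
import torch.nn as nn

class CustomModel(nn.Module):
    def __init__(self, in_dim=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # single output, as required by the current setup
        )

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    # After training, save the checkpoint next to this script as model.pt.
    torch.save(CustomModel().state_dict(), "model.pt")
```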
We currently support a simple benchmarking setup. To get started:
- Edit the Configuration File: Fill in the details in the provided configuration file. This file allows you to specify instance parameters and paths for benchmarking.
- Run the Benchmark: Once configured, start the benchmark by running:
python customize.py
Note: At present, only models with a single output are supported. Models requiring multiple outputs, such as Pensieve, are not yet compatible with this setup.
The benchmarking results will be displayed in the terminal upon completion.