
fuzz-evaluator

Reproducible Fuzzer Evaluations

This organization accompanies our paper "SoK: Prudent Evaluation Practices for Fuzzing", accepted to the IEEE Symposium on Security and Privacy (S&P) 2024. You can find the preprint here.

Most importantly, we would like to initiate a discussion on how future fuzzing papers can ensure their evaluations are robust and reproducible. We have collected all our lessons learned as guidelines at: https://github.com/fuzz-evaluator/guidelines

Contributions in the form of discussions or changes are welcome! Our goal is to keep this repository up-to-date so that it provides a sensible and helpful set of guidelines for future research.

Quick Overview

For our paper, we have read 150 fuzzing papers published at prestigious conferences and studied how they conduct their evaluations. We find that there is much room for improvement, for example, in terms of statistical evaluation. Please check out the paper (or preprint) for details on our results. Beyond this literature survey, we further made an effort to reproduce the artifacts of eight papers. For each of them, you can find our reproduction artifact among this organization's repositories.

We emphasize that this is not intended to point fingers but to identify potential pitfalls and ensure that future fuzzing evaluations can avoid them. We all make mistakes -- the best we can do is learn from them!

Finally, based on our findings from the literature survey and artifact evaluation, we provide revised recommendations (again, your input is welcome!). Regarding the statistical analysis, you can find our scripts here.
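To illustrate the kind of statistical analysis recommended here, the sketch below computes the Vargha-Delaney A12 effect size and an exact permutation test over repeated fuzzing trials. This is a minimal Python illustration only, not the actual analysis scripts of this organization (those are written in R), and the coverage numbers are made up for the example.

```python
from itertools import combinations

def vargha_delaney_a12(xs, ys):
    """Probability that a random draw from xs exceeds one from ys (ties count half)."""
    greater = sum(1 for x in xs for y in ys if x > y)
    ties = sum(1 for x in xs for y in ys if x == y)
    return (greater + 0.5 * ties) / (len(xs) * len(ys))

def exact_perm_p(xs, ys):
    """Exact one-sided permutation test on the difference of means.

    Enumerates every relabeling of the pooled samples and counts how often
    the mean difference is at least as large as the observed one.
    """
    observed = sum(xs) / len(xs) - sum(ys) / len(ys)
    pooled = xs + ys
    n = len(xs)
    count, total = 0, 0
    for idx in combinations(range(len(pooled)), n):
        group_a = [pooled[i] for i in idx]
        group_b = [pooled[i] for i in range(len(pooled)) if i not in idx]
        diff = sum(group_a) / n - sum(group_b) / len(group_b)
        if diff >= observed:
            count += 1
        total += 1
    return count / total

# Hypothetical branch-coverage results from 5 trials each of two fuzzers.
cov_a = [1450, 1490, 1510, 1475, 1502]
cov_b = [1300, 1410, 1390, 1405, 1350]

print(f"A12 = {vargha_delaney_a12(cov_a, cov_b):.2f}")
print(f"p   = {exact_perm_p(cov_a, cov_b):.4f}")
```

By convention, A12 above 0.71 indicates a large effect; with only a handful of trials per fuzzer, exact enumeration of all relabelings is cheap and avoids approximation.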

Citation

To cite our work, feel free to use the following BibTeX entry:

@inproceedings{schloegel2024fuzzingsok,
  title = {SoK: Prudent Evaluation Practices for Fuzzing},
  author = {Moritz Schloegel and Nils Bars and Nico Schiller and Lukas Bernhard and Tobias Scharnowski and Addison Crump and Arash Ale-Ebrahim and Nicolai Bissantz and Marius Muench and Thorsten Holz},
  booktitle = {IEEE Symposium on Security and Privacy (SP)},
  year = {2024},
  doi = {10.1109/SP54263.2024.00137},
}  


Repositories

Showing 10 of 23 repositories:
  • FOX-fuzzopt-eval-upstream
  • FOX-upstream (forked from FOX-Fuzz/FOX): Coverage-guided Fuzzing as Online Stochastic Control
  • firmafl-eval
  • guidelines
  • .github
  • DARWIN-eval
  • SoFi-eval
  • SoFi-upstream
  • FishFuzz-eval
  • statistics
