This is not a random benchmark! ... or is it?
The suite was developed to track the performance progress of xorshift, which @AndreasMadsen and I co-developed during a Node.js hackathon in Copenhagen.
The benchmark includes the following npm packages:

- `Math.random()` (native)
- `alea`
- `mersenne`
- `mt19937`
- `uupaa.random.js`
- `xorshift`
Methodology: to keep comparisons consistent, all packages are benchmarked on their ability to generate doubles in the range [0, 1). If a package does not provide this directly, the normalisation is done in the appropriate wrapper.
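As a sketch of what such a wrapper does, here is a hypothetical example (`randUint32` stands in for any package that emits raw 32-bit integers rather than doubles):

```javascript
// Hypothetical generator returning raw 32-bit unsigned integers,
// standing in for a package that does not produce doubles itself.
function randUint32 () {
  return (Math.random() * 0x100000000) >>> 0
}

// Normalise to a double in [0, 1) by dividing by 2^32, so every
// package is compared on the same output range.
function normalised () {
  return randUint32() / 0x100000000
}
```

Dividing by 2^32 (rather than 2^32 − 1) keeps the result strictly below 1, matching the [0, 1) range of `Math.random()`.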
Each package is sampled 100 times, with each sample running 1e6 iterations. The mean and standard deviation are then normalised by the number of iterations, to get a measure of the cost of a single operation. This might be misleading, however, because all the operations of a single package are batched.
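The sampling scheme can be sketched roughly as follows (this is an illustration of the idea, not the suite's actual implementation; `sample` and `stats` are hypothetical names):

```javascript
// Time `iterations` calls of `fn` and return the estimated cost of
// one operation in nanoseconds. Because the whole batch is timed at
// once, per-call overhead is averaged out, which is what makes the
// per-operation figure potentially misleading.
function sample (fn, iterations) {
  const start = process.hrtime.bigint()
  for (let i = 0; i < iterations; i++) fn()
  const elapsedNs = Number(process.hrtime.bigint() - start)
  return elapsedNs / iterations
}

// Collect several samples and compute their mean and standard
// deviation, mirroring the 100-sample / 1e6-iteration setup.
function stats (fn, samples, iterations) {
  const times = []
  for (let i = 0; i < samples; i++) times.push(sample(fn, iterations))
  const mean = times.reduce((a, b) => a + b, 0) / samples
  const variance = times.reduce((a, t) => a + (t - mean) ** 2, 0) / (samples - 1)
  return { mean, sd: Math.sqrt(variance) }
}
```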
```shell
npm install random-benchmark
cd node_modules/random-benchmark
npm install
npm test
```
If you're developing your own RNG, you can symlink the package into `random-benchmark/node_modules/` and write a wrapper so you can test it against the suite.
The benchmark is strongly inspired by htmlparser-benchmark and levinstein-benchmark. It is composed of four layers:

- `index.js` is the general CLI interface. The available wrappers are loaded here and spawned as workers.
- `worker.js` is responsible for taking a given wrapper and turning it into a benchmark, as well as monitoring progress.
- `benchmark.js` is the abstract "class" where the nitty-gritty details of running each wrapper are implemented, as well as calculating statistics using `summary`.
- `wrapper/*.js` is a file for each benchmark to run in the suite. A wrapper follows the signature `fn(iterations, callback)`, where `callback` is a standard Node.js style callback and `iterations` is how many times the operation should be repeated for the current sample. `Benchmark` will repeat this several times to calculate a sample mean.
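A minimal wrapper following that signature might look like this (the file name and the use of `Math.random()` are illustrative; you would swap in the RNG under test):

```javascript
// Hypothetical wrapper, e.g. wrapper/my-rng.js, following the
// fn(iterations, callback) signature the suite expects.
function myRngWrapper (iterations, callback) {
  let sink = 0
  for (let i = 0; i < iterations; i++) {
    // Accumulate into `sink` so the call is not optimised away;
    // replace Math.random() with the generator being benchmarked.
    sink += Math.random()
  }
  // Standard Node.js style callback: error-first, null on success.
  callback(null)
}

module.exports = myRngWrapper
```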