If I have something like `@b rand(1000) sort!`, the first eval is much slower than subsequent evals within a given sample, which violates Chairmarks' benchmarking assumptions and skews the reported results. For example, `@b rand(1000) sort!` reports an unrealistically fast runtime, while `@b rand(100_000) sort!` is realistic.
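The likely mechanism (my reading, not confirmed in the source): `sort!` mutates the array produced by the setup expression, so after the first eval the input is already sorted, and sorting presorted data is typically much cheaper. A minimal sketch of the effect:

```julia
# `sort!` mutates its argument in place, so repeated evals within a
# sample operate on progressively "easier" (already sorted) input.
x = rand(1000)
sort!(x)            # first eval: sorts genuinely random data
@assert issorted(x) # subsequent evals re-sort already-sorted data, which is much faster
```

With `evals > 1` per sample, the per-eval time is then dominated by the cheap already-sorted case, which explains the unrealistically fast report.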
I guess most of these cases can be detected by systematically running a second evaluation after the first one? Of course, it's debatable whether the benefit outweighs the cost.
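One way such a check could look (a hypothetical helper for illustration, not Chairmarks API; the name, the `ratio` threshold, and the setup/function calling convention are all assumptions):

```julia
# Hypothetical sanity check: run the target twice on the same (possibly
# mutated) input and flag cases where the first eval is much slower.
function first_eval_suspect(setup, f; ratio = 2.0)
    x  = setup()
    t1 = @elapsed f(x)
    t2 = @elapsed f(x)   # second eval sees any mutation from the first
    return t1 > ratio * t2
end
```

A benchmark flagged this way could either be forced to `evals=1` or reported with a warning.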
For `seconds=0.1` (the default), we'll choose to run only a single eval if the runtime is greater than about 0.02% of the budget. In that case there isn't actually an issue, because `evals=1`. If the runtime is less than 0.02% of the budget, then it should be pretty cheap to perform this check.
For higher budgets, the situation is even better. For lower budgets, it seems reasonable to perform fewer sanity checks.
This is all assuming that runtime is dominated by evaluating the target function rather than by Chairmarks plumbing or by the setup or teardown functions.
See: compintell/Mooncake.jl#140