Description
@arielb1 and @Mark-Simulacrum have done awesome work getting perf.rust-lang.org up and running again. Unfortunately, the current setup does not seem to be handling incremental data correctly (in particular, if you view the graphs, you'll see that all the measurements for the incremental tests -- which have names like `foo@010-bar` -- are zero). It'd be great to figure out what is going wrong! As a second goal, it'd be great to improve this setup and find better ways to visualize the data.
The current setup is described in the README on the rustc-benchmarks repository. The basic idea is that we invoke a series of make targets with funny names like `all@010-bar`, and each one progressively mutates the setup in some way and then does a compilation (for example, they might apply a patch). This allows us to measure the effect of each change on compilation time.
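Spelled out as plain commands, one of those staged targets amounts to roughly the sketch below. The patch path, cache directory, and rustc flags here are illustrative guesses on my part, not the repository's actual rules (those live in the rustc-benchmarks Makefiles):

```sh
# Hypothetical expansion of a single "mutate then compile" stage --
# not copied from rustc-benchmarks, just the general shape.
patch -p1 < patches/010-bar.diff              # mutate the setup, e.g. tweak one method
/usr/bin/time -p \
  rustc -Zincremental=incr-cache src/lib.rs   # recompile, reusing the incremental cache dir
```

The point is just that every numbered stage boils down to "change something, rebuild, record the time", and that recorded time is what becomes one data point on the site.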
To explain the idea, here are some of the regex targets:
- `regex-0.1.80@010-baseline` -- builds without incremental, to establish a baseline time
- `regex-0.1.80@020-incr-from-scratch` -- builds for the first time with incremental; comparing to the previous result gives us an idea of the base overhead for incremental tracking
- `regex-0.1.80@030-compile_one` -- applies a patch that modifies a particular method; comparing to the `010-baseline` result gives us an idea of the gain from using incremental versus compiling from scratch every time, for this particular change
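For anyone who wants to reproduce the numbers by hand before digging into the collector, something like the following might work from a rustc-benchmarks checkout. The working directory and the `/usr/bin/time` wrapper are guesses; the README there is the authoritative reference:

```sh
# Hypothetical local run of the three regex stages; check the
# rustc-benchmarks README for the real layout before relying on this.
for target in regex-0.1.80@010-baseline \
              regex-0.1.80@020-incr-from-scratch \
              regex-0.1.80@030-compile_one; do
  echo "== $target =="
  /usr/bin/time -p make "$target"   # prints real/user/sys per stage
done
```

That should at least make it possible to sanity-check the raw timings outside of whatever perf itself is running.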
Perhaps @Mark-Simulacrum or @arielb1 can give some tips as to what might be wrong and how to go about debugging the problem? I'm not really sure (for example) what code perf is running or how to make it run locally. I'm labeling this as "E-mentor" but volunteering them as mentors. =)