Track two more details about runs - the aggregate name, and run name. #675
Conversation
This is related to BaaMeow's work in google#616 but is not based on it. Two new fields are tracked and dumped into JSON:
* If the run is an aggregate, the aggregate's name is stored. It can be RMS, BigO, mean, median, stddev, or any custom stat name.
* The run name without the aggregate suffix is additionally stored, i.e. not the name of the benchmark function, but the actual run name minus the 'aggregate name' suffix.
This way one can group all the runs by run name and filter by a particular aggregate type. I *might* need this for further tooling improvements. Or maybe not. But this is certainly worthwhile for custom tooling.
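To illustrate the kind of custom tooling this enables, here is a minimal sketch that groups runs by `run_name` and filters by `aggregate_name`. The JSON excerpt is hypothetical (the benchmark names, times, and exact surrounding structure are invented for illustration); only the two field names follow this PR's description.

```python
import json

# Hypothetical excerpt of the JSON reporter output, showing the two
# new fields ("run_name", "aggregate_name") added by this PR.
data = json.loads("""
{
  "benchmarks": [
    {"name": "BM_Foo/repeats:2_mean",   "run_name": "BM_Foo/repeats:2",
     "aggregate_name": "mean",   "real_time": 10.5},
    {"name": "BM_Foo/repeats:2_stddev", "run_name": "BM_Foo/repeats:2",
     "aggregate_name": "stddev", "real_time": 0.3}
  ]
}
""")

def aggregates_of(benchmarks, run_name, aggregate_name):
    # Select the runs belonging to one run_name, then keep only the
    # requested aggregate type (mean, stddev, BigO, RMS, ...).
    return [b for b in benchmarks
            if b.get("run_name") == run_name
            and b.get("aggregate_name") == aggregate_name]

means = aggregates_of(data["benchmarks"], "BM_Foo/repeats:2", "mean")
print(means[0]["real_time"])  # 10.5
```

The same `run_name` key ties all repetitions and aggregates of one benchmark together, which is exactly the grouping the plain `name` field (with its `_mean`/`_stddev` suffixes) does not allow.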
❌ Build benchmark 1433 failed (commit a6af2ed1f6 by @LebedevRI)
And this is why two build systems is a horrible idea.
❌ Build benchmark 1434 failed (commit 583c4ef1b5 by @LebedevRI)
src/complexity.cc
Outdated
```cpp
std::string benchmark_name =
    reports[0].benchmark_name.substr(0, reports[0].benchmark_name.find('/'));
```
Changed to:
```cpp
std::string benchmark_name = reports[0].benchmark_name().substr(
```
This should now be `run_name`, right? At least, you assign it to `run_name` below.
Hmm, right, sorry.
I think this `run_name` will be somewhat broken for these range-based complexity measurements...
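To sketch why `run_name` is awkward here: the complexity aggregates (BigO, RMS) summarize several runs whose names differ only in the range suffix, so the code above trims everything after the first `/` to get a single family name. The benchmark names below are invented for illustration; the trimming mirrors the `substr(0, find('/'))` logic in the `src/complexity.cc` snippet above.

```python
# Runs feeding one complexity measurement differ only in the range suffix.
names = ["BM_Foo/8", "BM_Foo/64", "BM_Foo/512"]

def family_name(name):
    # Mirrors the substr(0, find('/')) trimming from src/complexity.cc:
    # keep only the part before the first '/'.
    return name.split("/", 1)[0]

# All range variants collapse to one family name, which is what the
# BigO/RMS aggregate rows get as their run_name.
print(sorted({family_name(n) for n in names}))  # ['BM_Foo']
```

So unlike mean/median/stddev, the BigO/RMS rows cannot carry the full name of any single underlying run, only the shared prefix.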
Thank you for the review!