Behavior of --benchmark_filter="" doesn't match the documentation #736

Closed
derekmauro opened this issue Nov 30, 2018 · 6 comments

@derekmauro
Member

There is a contradiction between the documentation of this flag and the behavior of the code: the documentation says the empty string runs no benchmarks, but the code actually runs all of them.

DEFINE_string(benchmark_filter, ".",
"A regular expression that specifies the set of benchmarks "
"to execute. If this flag is empty, no benchmarks are run. "
"If this flag is the string \"all\", all benchmarks linked "
"into the process are run.");

benchmark/src/benchmark.cc, lines 349 to 351 at 99d1356:

std::string spec = FLAGS_benchmark_filter;
if (spec.empty() || spec == "all")
spec = "."; // Regexp that matches all benchmarks

I think the correct behavior for the empty string should be to run none of them. There may be CI systems that don't want to run time-consuming benchmarks by default, and overriding this flag with the empty string would be a natural way to do that.
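A minimal sketch of that proposal against the snippet above; the early exit for an explicitly empty filter is hypothetical, not what the library currently does:

std::string spec = FLAGS_benchmark_filter;
if (spec.empty()) {
  // Hypothetical: an explicitly empty filter selects no benchmarks,
  // so a CI system can opt out with --benchmark_filter="".
  return 0;
}
if (spec == "all")
  spec = "."; // Regexp that matches all benchmarks

A CI invocation could then disable benchmarks with something like ./my_benchmark --benchmark_filter= (binary name made up).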

@dmah42
Member

dmah42 commented Dec 3, 2018

You're right that this is a mistake in the documentation.

Running all of them is the expected design, since that is the default case. See also gtest_filter in GoogleTest, which works the same way.

@LebedevRI
Collaborator

By default, obviously everything should be run.
But I think the question is about explicitly passing --benchmark_filter="".

@dmah42
Member

dmah42 commented Dec 3, 2018

Sorry, yes. I just reparsed that as I was updating the documentation :)

I think that is a good idea.

@LebedevRI
Collaborator

LebedevRI commented Dec 3, 2018

(Actually, I'm not seeing any contradiction.)
Yeah, I think the doc should be updated.
I'm not sure what the expected outcome of not running any benchmarks would be.
You can turn the execution into a no-op with --benchmark_list_tests=true, for example.
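For example (the flag is real; the binary name is made up):

./my_benchmark --benchmark_list_tests=true

This prints the names of the registered benchmarks and exits without running any of them.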

@dmah42
Member

dmah42 commented Dec 3, 2018

Hang on, sorry, let me think about this again.

Why would you want to pass ""? If you're running the code but not running any of the benchmarks, what are you doing?

@derekmauro
Member Author

derekmauro commented Dec 3, 2018

The short story is that I had a benchmark that I thought had to be declared as a cc_test (in Bazel) to get around a dependency check, but I didn't want the benchmark to actually run when bazel test was invoked because it would take too long. I found a solution to the dependency problem (cc_binary with testonly = 1), so for now simply updating the documentation is fine.
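For reference, a minimal Bazel sketch of that arrangement (the target, source file, and dependency label are made up; adjust to your workspace):

cc_binary(
    name = "my_benchmark",
    testonly = 1,  # allows depending on test-only targets without being a cc_test
    srcs = ["my_benchmark.cc"],
    deps = ["//third_party/benchmark"],  # hypothetical label for the benchmark library
)

Because it is a cc_binary, bazel test never runs it, which avoids the long-running-benchmark problem.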
