Commit #918: Mark scaled_pedestal tests as flaky

[author: gonzaponte]

These tests fail fairly frequently because they rely on statistics. We
will review them (#919), but for the time being they are marked as
flaky so that sporadic failures do not bother us.

[reviewer: jwaiton]

A simple fix that improves testing consistency; approved.
jwaiton authored and carhc committed Nov 13, 2024
2 parents f2e52af + ec748d6 commit 3a52d4f
Showing 1 changed file with 2 additions and 0 deletions.
2 changes: 2 additions & 0 deletions invisible_cities/calib/spe_response_test.py
@@ -191,6 +191,7 @@ def dark_spectrum_global():
     return parameters, pedestal + signal


+@flaky(max_runs=2)
 def test_scaled_dark_pedestal_pedestal(dark_spectrum_local):
     (bins, nsamples, scale, poisson_mean,
      pedestal_mean, pedestal_sigma,
@@ -211,6 +212,7 @@ def test_scaled_dark_pedestal_pedestal(dark_spectrum_local):
     assert np.all(in_range(pull, -2.5, 2.5))


+@flaky(max_runs=2)
 def test_scaled_dark_pedestal_spe(dark_spectrum_global):
     # Test that the spectrum we get is identical ignoring the pedestal
     (bins, nsamples, scale, poisson_mean,
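The `@flaky(max_runs=2)` decorator comes from the `flaky` pytest plugin: a decorated test is rerun up to `max_runs` times and counts as failed only if every run fails, which halves the nuisance rate of statistical tests (a test that spuriously fails with probability p fails both runs with probability roughly p²). A minimal sketch of that retry semantics, where `flaky_like` and `test_statistical` are hypothetical stand-ins written for illustration, not part of the `flaky` library or this repository:

```python
import random


def flaky_like(max_runs=2):
    """Rerun a test up to max_runs times; pass if any run passes.

    Hypothetical stand-in mimicking the retry behaviour of the
    `flaky` pytest plugin (with its default min_passes=1).
    """
    def decorator(test):
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(max_runs):
                try:
                    return test(*args, **kwargs)
                except AssertionError as exc:
                    last_exc = exc  # remember failure, retry
            raise last_exc          # every run failed
        return wrapper
    return decorator


@flaky_like(max_runs=2)
def test_statistical():
    # A statistics-based check like the ones in spe_response_test.py:
    # the sample mean of 1000 standard normals should be near zero,
    # but an unlucky draw can push it outside any fixed tolerance.
    sample = [random.gauss(0.0, 1.0) for _ in range(1000)]
    mean = sum(sample) / len(sample)
    assert abs(mean) < 0.2
```

With the real plugin the decorator is imported as `from flaky import flaky`; pytest then reports the test as passed as long as one of the two runs succeeds.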
