operations-per-run appears to be ignored #466
Indeed, issues 4163 and 4431 each consumed 3 operations, yet the limit was set to 1. Once PR #463 lands, we can add coverage for the operations, which was not the case at all before.
Thanks for the response, and thank you for your work on this extremely useful action! Looking deeper, I found #230 (comment), which explains some of my confusion. It seems the limit may only get checked between batches? I'm not sure that this option does adequate rate limiting, since:
IMO "rate limiting" in the current sense of "operations" (any API-level queries) would only be useful with something that throttles the operations per second/minute, not per run. In addition, an option to limit the number of mutations performed per script invocation would be extremely useful as a way to prevent bad configuration (or bugs in the
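The per-second/minute throttling suggested above could look something like a sliding-window limiter. This is a minimal illustrative sketch, not part of the action; the `OpsThrottle` class and its method names are assumptions for the example:

```typescript
// Sliding-window throttle for "operations per minute" (hypothetical sketch).
// The caller passes the current time so the logic is deterministic and testable.
class OpsThrottle {
  private timestamps: number[] = [];

  constructor(private readonly maxPerMinute: number) {}

  // Returns true if an operation may proceed at time `nowMs`, false if the
  // per-minute budget is currently exhausted.
  tryAcquire(nowMs: number): boolean {
    // Drop timestamps older than the 60-second window.
    this.timestamps = this.timestamps.filter((t) => nowMs - t < 60_000);
    if (this.timestamps.length >= this.maxPerMinute) {
      return false;
    }
    this.timestamps.push(nowMs);
    return true;
  }
}
```

A caller would `tryAcquire` before each API call and sleep (or stop) when it returns false, which caps the request rate rather than the total per run.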
I never checked this before, but now I have, and I can confirm that you are right: the check for remaining operations is done at the end of each batch of 100 processed issues/PRs. Regarding what you think is best, I reckon a second criterion in terms of seconds/minutes might interest some consumers; nonetheless, since a run never takes more than a minute, this option seems clearly enough, and consumers should set it accordingly. Regarding your suggestion for the mutations, given my comment above, the best way to achieve that would be to simply split this option in half, mutations versus queries, IMO. WDYT?
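The difference between the two strategies can be sketched like this (illustrative TypeScript; `processPerBatch`, `processPerIssue`, and the fixed cost-per-issue model are assumptions for the example, not the action's real code):

```typescript
type Issue = { number: number };

// Per-batch check: the remaining budget is only inspected between batches of
// 100, so a low limit can be massively overshot inside one batch.
function processPerBatch(
  issues: Issue[],
  operationsPerRun: number,
  costPerIssue: number
): number {
  let consumed = 0;
  for (let start = 0; start < issues.length; start += 100) {
    if (consumed >= operationsPerRun) break; // only checked here
    const batch = issues.slice(start, start + 100);
    for (const _ of batch) {
      consumed += costPerIssue; // whole batch runs regardless of the limit
    }
  }
  return consumed;
}

// Fail-fast check: inspect the budget before each issue, so at most one
// issue's worth of operations can exceed the limit.
function processPerIssue(
  issues: Issue[],
  operationsPerRun: number,
  costPerIssue: number
): number {
  let consumed = 0;
  for (const _ of issues) {
    if (consumed >= operationsPerRun) break;
    consumed += costPerIssue;
  }
  return consumed;
}
```

With `operations-per-run: 1` and 3 operations per issue, the per-batch variant still consumes 300 operations on a 100-issue batch, while the per-issue variant stops after the first issue.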
I can provide a PR to fix this bug, and if you need more fine-tuned option(s), I will let you open a new issue with the feature request template.
Sorry, missed this update!
That seems reasonable.
This is something I would like, because:
Sounds good, thanks!
fix(operations): fail fast the current batch to respect the operations limit

Instead of processing an entire batch of 100 issues before checking the operations left, do the check before processing each issue so that the operations-per-run limit is respected as expected. Fixes actions#466
fix(operations): fail fast the current batch to respect the operations limit (#474)

* fix(operations): fail fast the current batch to respect the operations limit. Instead of processing an entire batch of 100 issues before checking the operations left, do the check before processing each issue so that the operations-per-run limit is respected as expected. Fixes #466
* test(debug): disable the dry-run for the test by default, so we can test the operations per run and have more complete logs to help debug the workflow
* chore(logs): also display the stats when the operations-per-run limit stopped the workflow
* chore(stats): fix a bad stat related to the consumed operations
* test(operations-per-run): add coverage
* chore: update index
I set this to 1:
https://github.com/grpc/grpc-go/runs/2667832758?check_suite_focus=true#step:2:9
Yet 2 issues were marked as stale during this run:
https://github.com/grpc/grpc-go/runs/2667832758?check_suite_focus=true#step:2:117
https://github.com/grpc/grpc-go/runs/2667832758?check_suite_focus=true#step:2:194