Deal with GitHub's secondary rate limit #2355
Instead of `(Boolean, List[Update.Single])`, where the first element indicates whether the second is empty, we now use `Option[Nel[UpdateState.WithUpdate]]`. This is a little nicer since we don't need to interpret the `Boolean` but can just traverse the `Option`. Having the `UpdateState`s available in `StewardAlg` will probably also come in handy when dealing with #2355. I imagine that some kind of rate limiter in `StewardAlg` would solve that issue.
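A minimal sketch of the shape of that change, using a simplified stand-in for `UpdateState.WithUpdate` rather than scala-steward's real types:

```scala
import cats.data.NonEmptyList
import cats.effect.IO
import cats.syntax.all._

object UpdateStatesSketch {
  // Simplified stand-in for scala-steward's UpdateState.WithUpdate.
  final case class WithUpdate(dependency: String)

  // Before: (Boolean, List[WithUpdate]) where the Boolean said whether the list was empty.
  // After: Option[NonEmptyList[WithUpdate]] carries the same information in the type,
  // so the caller just traverses the Option instead of interpreting a Boolean.
  def processUpdates(states: Option[NonEmptyList[WithUpdate]]): IO[Unit] =
    states.traverse_(nel => nel.traverse_(s => IO.println(s"processing ${s.dependency}")))
}
```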
Hello, we're trying to run a local version of scala-steward against approx. 30-35 repos, but given how out of date they are, we're frequently hitting this rate limit. I acknowledge that GitHub isn't returning the correct header, but is there any config or setting we can tweak to hopefully mitigate this until a full fix is in place?
Looks like GitHub returns
Actual response I received:
You got this header in response to a …?
Ah, good point. So #2540 would only fix a subset of these issues. The majority of the failures I'm seeing are actually from …
Regarding #2540, code LGTM 👍
I wonder if adding …
Thanks for the review of #2540! The document also says:
Implementing both of the GitHub best practices seems reasonable to me; probably best as separate PRs.
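For illustration, a hedged sketch of what following both practices could look like; this is not scala-steward's actual HTTP client code, and the `Response` type as well as the one-second and one-minute values are assumptions:

```scala
import scala.concurrent.duration._
import cats.effect.IO

object RateLimitSketch {
  // Hypothetical response type, not scala-steward's real client model.
  final case class Response(status: Int, headers: Map[String, String])

  // Best practice 1: pause briefly between consecutive mutating requests
  // (the one-second value is an assumption, not taken from GitHub's docs).
  val pauseBetweenRequests: IO[Unit] = IO.sleep(1.second)

  // Best practice 2: if a response carries Retry-After, honor it; otherwise fall
  // back to a conservative default, since the secondary-rate-limit response
  // discussed in this issue comes without that header.
  def waitBeforeRetry(res: Response, fallback: FiniteDuration = 1.minute): IO[Unit] = {
    val retryAfter = res.headers.get("Retry-After").flatMap(_.toLongOption).map(_.seconds)
    IO.sleep(retryAfter.getOrElse(fallback))
  }
}
```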
It's true that not all requests would return Retry-After. Did you mean this as a blocker for #2540? Is there anything else I can do to address concerns so that this can be merged? We have a long list of repos we run scala-steward against, and it's currently failing for us at the end of that list. I'd love to see a fix merged one way or another.
I've merged scala-steward-org/scala-steward-action#335. I'll keep this issue open until we have solutions for the other cases.
We are still struggling with this. I wonder if the Retry-After from the other endpoints gives us some empirical guess as to what an appropriate time to sleep would be. Like, is it always a fixed duration, or is it (more likely) some token bucket that varies wildly from run to run?
Or if we know whether we're talking about resting on the order of a couple of seconds vs. "a few minutes", as the message says. A couple of seconds seems worthwhile; "a few minutes" could really add up on runners where we pay by the minute.
I don't maintain an instance that works with GitHub anymore, but I still have logs of @scala-steward from March and April 2022. I looked at four occasions where it hit the secondary rate limit, and it was able to create PRs again after ~40, 30, 8, and 40 minutes.
I don't know if waiting for a few seconds or minutes between PRs would have meant that it never hit the secondary rate limit.
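Given those observed recovery times, one option would be to back off with growing pauses once the secondary limit is hit. A rough sketch; the 5-minute start, 40-minute cap, and the `isSecondaryLimit` predicate are guesses for illustration, not values GitHub documents:

```scala
import scala.concurrent.duration._
import cats.effect.IO

object BackoffSketch {
  // Retry PR creation with doubling pauses, capped at roughly the longest
  // recovery time (~40 minutes) observed in the logs mentioned above.
  def createPrWithBackoff[A](createPr: IO[A])(isSecondaryLimit: Throwable => Boolean): IO[A] = {
    def go(delay: FiniteDuration): IO[A] =
      createPr.handleErrorWith { error =>
        if (isSecondaryLimit(error) && delay <= 40.minutes)
          IO.sleep(delay) *> go(delay * 2)
        else IO.raiseError(error)
      }
    go(5.minutes)
  }
}
```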
My public Scala Steward instance has repeatedly hit GitHub's secondary rate limit recently when creating PRs:
GitHub docs about the secondary rate limit are here:
The response doesn't include a `Retry-After` header, so we don't know how long Scala Steward should wait before creating the next PR.
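Since `Retry-After` is missing, the client has to recognize the secondary-rate-limit response some other way and pick its own pause. A hedged sketch, assuming the 403 body contains GitHub's "secondary rate limit" wording; the exact message and scala-steward's real handling may differ:

```scala
import scala.concurrent.duration._

object DetectionSketch {
  // Assumption: GitHub signals the secondary rate limit with a 403 whose body
  // mentions "secondary rate limit"; the exact wording is not guaranteed here.
  def hitSecondaryRateLimit(status: Int, body: String): Boolean =
    status == 403 && body.toLowerCase.contains("secondary rate limit")

  // With no Retry-After to go on, the wait has to be a guess, e.g. the
  // backoff sketched earlier or a fixed conservative pause.
  val fallbackPause: FiniteDuration = 5.minutes
}
```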