
Carmine message-queue: allow throughput control #308

Open
sanguivore-easyco opened this issue May 29, 2024 · 4 comments
@sanguivore-easyco

I have a use case where a third-party API I'm interacting with has a strict rate limit, and rather than responding with a 429 for each request past the limit, it responds with 429 for every request for ten seconds whenever the limit is exceeded.

Thus, I want to eagerly throttle my workers according to this limit. I imagine this can be useful in other cases where a dependency can only handle a certain level of throughput and lacks good resilience characteristics such that the consumer needs to handle load management.

Current Workaround

By setting :throttle-ms to a function backed by a Redis rate limiter that returns the number of milliseconds to wait before picking up a job, you can get close to adhering to the rate limit. However, :throttle-ms is currently considered after picking up a message, so you end up throttling to the rate limit plus the number of workers. Ideally, the rate limit settings should not need to know the number of workers.
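For reference, the workaround looks roughly like this (a minimal sketch: `acquire-permit!` is a hypothetical Redis-backed limiter, not part of Carmine, and the option shapes follow the `:throttle-ms` description in this thread):

```clojure
(ns example.throttled-worker
  (:require [taoensso.carmine.message-queue :as car-mq]))

;; Hypothetical Redis-backed rate limiter: returns msecs to wait
;; before the next permit, or 0 when a permit is available now.
(declare acquire-permit!)

(def worker
  (car-mq/worker
    {:pool {} :spec {:uri "redis://127.0.0.1:6379"}} ; conn opts
    "api-jobs"
    {:handler (fn [{:keys [message]}]
                ;; Call the rate-limited third-party API here
                {:status :success})
     ;; Checked only *after* a message has been dequeued, hence the
     ;; over-shoot of roughly one in-flight message per worker:
     :throttle-ms (fn [_queue-size]
                    (let [wait-ms (acquire-permit!)]
                      (when (pos? wait-ms) wait-ms)))}))
```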

Possible Solution Suggestions

  • Move the :throttle-ms check to happen before picking up a message
    • Possibly make this configurable via a :throttle-consideration option taking :before-dequeue or :after-processing (the latter as the default, to preserve current behavior)
  • Add an option when creating a worker, :work-rate-limiter, that accepts a single argument (a map with the keys :qname, :queue-size, and :worker) and returns true if the next message should be dequeued and processed. If it returns false, the work loop would proceed as if processing were instantaneous, skipping the dequeue and handler call
    • Another possible option here would be for the limiter to instead return a number of milliseconds to wait before checking again, with a message dequeued and processed only when it returns <= 0 (see the sketch after this list)
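To make the msecs-returning variant concrete, here's a rough sketch of what a single work-loop iteration could look like under that proposal (illustrative only; none of these names are Carmine API):

```clojure
;; ctx would be a map like {:qname .. :queue-size .. :worker ..}
(defn work-loop-step [work-rate-limiter dequeue! handle! ctx]
  (let [wait-ms (work-rate-limiter ctx)]
    (if (and wait-ms (pos? wait-ms))
      (Thread/sleep wait-ms)   ; skip dequeue entirely, re-check later
      (when-let [msg (dequeue!)]
        (handle! msg)))))      ; permit granted: dequeue + handle
```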

Clojurians' Slack thread for extra context: https://clojurians.slack.com/archives/C05M8GRL65Q/p1717003308524139?thread_ts=1716841423.174899&cid=C05M8GRL65Q

@ptaoussanis
Member

@sanguivore-easyco Thanks for the nice, clear issue - and for the solution suggestions - that's helpful 🙏

Definitely keen to get something like this in. Have only thought about it for a moment, but my first inclination would be closest to your third suggestion.

For simplicity/symmetry, what do you think of calling it something like :pre-throttle-ms with the same args and semantics as :throttle-ms:

  `:throttle-ms`      - msecs, or (fn [queue-size])=>?msecs (default `default-throttle-ms-fn`)
  `:pre-throttle-ms`  - msecs, or (fn [queue-size])=>?msecs (default `nil`)

What do you think?
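If that shape lands, usage might look something like this (hypothetical: `:pre-throttle-ms` is not in a released Carmine, and `conn-opts`, `handle-job`, and `acquire-permit!` are placeholders):

```clojure
(car-mq/worker conn-opts "api-jobs"
  {:handler         handle-job
   ;; Consulted *before* dequeue, so a denied permit never takes a
   ;; message off the queue and worker count stays out of the math:
   :pre-throttle-ms (fn [_queue-size]
                      (let [wait-ms (acquire-permit!)]
                        (when (pos? wait-ms) wait-ms)))})
```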

@sanguivore-easyco
Author

> @sanguivore-easyco Thanks for the nice, clear issue - and for the solution suggestions - that's helpful 🙏
>
> Definitely keen to get something like this in. Have only thought about it for a moment, but my first inclination would be closest to your third suggestion.
>
> For simplicity/symmetry, what do you think of calling it something like :pre-throttle-ms with the same args and semantics as :throttle-ms:
>
>   `:throttle-ms`      - msecs, or (fn [queue-size])=>?msecs (default `default-throttle-ms-fn`)
>   `:pre-throttle-ms`  - msecs, or (fn [queue-size])=>?msecs (default `nil`)
>
> What do you think?

That sounds great to me! This would definitely work for my use case, and remains reasonably straightforward for documentation and the like.

@ptaoussanis
Member

@sanguivore-easyco Thanks for the confirmation 👍 What's your level of urgency on this? Are you happy with your workaround in the meantime, or is that troublesome?

@sanguivore-easyco
Author

> @sanguivore-easyco Thanks for the confirmation 👍 What's your level of urgency on this? Are you happy with your workaround in the meantime, or is that troublesome?

My workaround will be fine for several months in all likelihood, and with your design, it's trivial for me to update once the change is in :) No rush
