For many stable systems -- especially distributed systems -- jitter is very important to avoid synchronising things that shouldn't be synchronised. For example, if you trigger a large number of health checks once every 10 seconds, you probably don't want them all to go out at exactly the same moment every 10 seconds. That would cause load spikes in your system and potentially in the other system.
A NIO user can of course calculate jitter manually by writing something like `scheduleRepeatedTask(..., .seconds(10) + .microseconds(Int64.random(in: 0..<10_000)))` but that's kinda tedious.
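For concreteness, here is roughly what that manual approach looks like with the existing `scheduleRepeatedTask(initialDelay:delay:notifying:_:)` API. Note that the random value is drawn once when the task is scheduled, so every iteration ends up with the same offset unless you cancel and reschedule by hand:

```swift
import NIOCore
import NIOPosix

let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
let loop = group.next()

// Base period of 10 seconds plus up to 10ms of random jitter. The jitter
// is computed here, once, and then reused for every iteration.
let jitter = TimeAmount.microseconds(Int64.random(in: 0..<10_000))
let task = loop.scheduleRepeatedTask(
    initialDelay: .seconds(10) + jitter,
    delay: .seconds(10) + jitter,
    notifying: nil
) { (_: RepeatedTask) in
    // ... send the health checks here ...
}
```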
It would be much better if one could schedule tasks with a maximum allowed jitter amount and NIO would do the jitter "calculation" itself. For example:
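Something along these lines (a sketch only; the exact spelling and the `maximumAllowedJitter` parameter name are illustrative, not a settled API):

```swift
loop.scheduleRepeatedTask(
    initialDelay: .seconds(10),
    delay: .seconds(10),
    maximumAllowedJitter: .milliseconds(10)  // hypothetical parameter
) { (_: RepeatedTask) in
    // ... send the health checks here ...
}
```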
which means that a random amount of jitter in `.zero ..< .milliseconds(10)` gets applied on every iteration (a different random value each time). But even for single-shot scheduled tasks, we should support jitter; see the sketch below.
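Until something like this lands, the single-shot version can be approximated in user code. The extension below is a hypothetical helper, not NIO API; it only relies on the existing `scheduleTask(in:_:)` and `TimeAmount`:

```swift
import NIOCore

extension EventLoop {
    /// Hypothetical convenience: schedules a one-off task after `delay` plus
    /// a random jitter drawn uniformly from `.zero ..< maxJitter`.
    func scheduleJitteredTask<T>(
        in delay: TimeAmount,
        maximumAllowedJitter maxJitter: TimeAmount,
        _ task: @escaping () throws -> T
    ) -> Scheduled<T> {
        // Guard against an empty range when no jitter is requested.
        let jitterNanos = maxJitter.nanoseconds > 0
            ? Int64.random(in: 0..<maxJitter.nanoseconds)
            : 0
        return self.scheduleTask(in: delay + .nanoseconds(jitterNanos), task)
    }
}
```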