Documentation for server.Policy does not agree with implementation #194
Comments
Yeah, looks like I introduced that in 27748fa without updating docs. Sorry.
I agree. I spilled a lot of words at the time I was crunching through this that might help explain: https://groups.google.com/g/capnproto/c/A4LiXXd1t94/m/GkFH1AK-AgAJ

The fundamental insight I have is that Cap'n Proto RPC is a work plan stream across all capabilities. You necessarily need to queue under the assumption that traffic will be bursty, but there are situations that can deadlock if not enough work resolves in time. IIUC, if there's protocol-level support for applying backpressure to individual objects while preserving E-Order, that would be better than the state of the world when I was trying to hack this in. An unbounded queue sounds potentially dangerous, though, but I think you're addressing that with the hard memory limit.
Right, the idea is that rather than relying on a queue bound for memory limiting, we'd track it manually. But under normal circumstances we would never expect to reach the memory limit; it's not for backpressure, it's a stopgap for abuse, and we just drop the connection if it is breached. We rely on cooperative mechanisms for "normal" backpressure. See also #198, where I started working on per-object backpressure.

I vaguely remember the mailing list thread. See also https://groups.google.com/g/capnproto/c/wbGvhHaBan4, which is from a couple of weeks ago.
Cool. Sounds like the right direction; let me know if there's anything else I can help with.
Everything on my roadmap above is done except for (1) the connection-level limits and (2) adaptive flow control (which, to be fair, the C++ implementation doesn't really have anyway; we already have a non-adaptive version). (2) is in progress as #294. I'm going to open a separate issue for the connection limits and close this one.
The documentation for `server.Policy` indicates that when the limits are exceeded, calls will immediately return exceptions. However, when pairing to debug #189, @lthibault and I discovered this is not actually what happens. Instead, the receive loop just blocks until an item in the queue clears.

However, I do not actually think what the documentation describes is what we want; it would mean lost messages if the sender is sending too fast, and it seems like it would be challenging to program around. We really want some kind of backpressure like that provided by the C++ implementation's streaming support.
I propose the following roadmap for dealing with memory/backpressure instead:

- Remove the `server.Policy` type entirely.
- In `rpc.Options`, add a field for a hard limit on memory consumed by outstanding rpc messages (of all types, including returns), after which the connection will just be dropped.

Thoughts?
@zombiezen, particularly since I'm proposing removing something you added for v3, I would appreciate your thoughts.