Connection level memory limits #298

Open · zenhack opened this issue Aug 27, 2022 · 4 comments

zenhack commented Aug 27, 2022

As discussed in #194, we should have the ability to specify connection-level memory limits, to protect against DoS. In particular, we want:

  1. "soft" limit on the size of unreleased incoming Call messages. Reading from the transport should block if we are above this limit.
  2. A "hard" limit on all memory allocated by the transport; if we exceed this we should drop the connection entirely.

We should not reach these limits under normal circumstances; cooperative flow control should prevent them. Notes:

  • In principle, (1) could cause deadlocks: this happens when mutually recursive calls bounce back and forth across a connection until they hit the limit, so it is somewhat like a stack overflow, except that you get a deadlock instead of a panic.
  • (1) is insufficient to protect from DoS attacks, since it does not account for (potentially large) return messages for which no finish message has come in.

I suggested adding these to rpc.Options, but I think the cleanest way to implement them is as Transports that impose the limits; I don't think any of the rest of the rpc subsystem needs to change. We could still add them to rpc.Options as a convenience, if we wanted.

We should also figure out how to make the API "safe by default" in this regard.
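
To make the wrapping-Transport idea concrete, here is a minimal sketch. The `Transport` interface below is a simplified stand-in chosen only so the sketch is self-contained (it is not the real rpc transport API), messages are measured by their encoded byte length, and it only accounts for unreleased incoming messages, so it does not address the note above about large unfinished returns.

```go
package limits

import (
	"errors"
	"sync"
)

// Transport is a simplified stand-in for the RPC transport interface,
// assumed here only so the sketch is self-contained.
type Transport interface {
	// RecvMessage returns the next incoming message and a release func
	// that the caller must invoke once it is done with the message.
	RecvMessage() (msg []byte, release func(), err error)
	Close() error
}

// ErrHardLimit is returned when the hard memory limit is exceeded.
var ErrHardLimit = errors.New("transport memory hard limit exceeded")

// limitedTransport wraps another Transport and enforces a soft and a hard
// limit on the total size of unreleased incoming messages.
type limitedTransport struct {
	inner     Transport
	softLimit int64 // block further reads above this many bytes
	hardLimit int64 // drop the connection above this many bytes

	mu    sync.Mutex
	cond  *sync.Cond
	inUse int64 // bytes held by unreleased incoming messages
}

// NewLimited wraps inner with the given soft and hard limits (in bytes).
func NewLimited(inner Transport, soft, hard int64) Transport {
	lt := &limitedTransport{inner: inner, softLimit: soft, hardLimit: hard}
	lt.cond = sync.NewCond(&lt.mu)
	return lt
}

func (lt *limitedTransport) RecvMessage() ([]byte, func(), error) {
	// Soft limit: block until enough earlier messages have been released.
	lt.mu.Lock()
	for lt.inUse >= lt.softLimit {
		lt.cond.Wait()
	}
	lt.mu.Unlock()

	msg, release, err := lt.inner.RecvMessage()
	if err != nil {
		return nil, nil, err
	}
	size := int64(len(msg))

	lt.mu.Lock()
	lt.inUse += size
	over := lt.inUse > lt.hardLimit
	if over {
		lt.inUse -= size
	}
	lt.mu.Unlock()

	if over {
		// Hard limit: give the message back and drop the connection.
		release()
		_ = lt.inner.Close()
		return nil, nil, ErrHardLimit
	}

	// Wrap the release func so the accounting is updated (and blocked
	// readers woken) when the caller is done with the message.
	wrapped := func() {
		release()
		lt.mu.Lock()
		lt.inUse -= size
		lt.mu.Unlock()
		lt.cond.Broadcast()
	}
	return msg, wrapped, nil
}

func (lt *limitedTransport) Close() error { return lt.inner.Close() }
```

A wrapper like this could be applied to whatever Transport is handed to the rpc connection, which is roughly why the rest of the rpc subsystem would not need to change.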


zenhack commented Aug 27, 2022

I'm going to stick this on the 3.0 milestone; making this safe by default might involve adding default limits, which could otherwise be a breaking change, so best to do that before we stabilize.
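
As a sketch of what "default limits" might look like, here is one hedged possibility; the field names, default values, and zero-means-default convention are all assumptions for illustration, not existing rpc.Options fields.

```go
package limits

// Hypothetical convenience knobs that could be surfaced via rpc.Options.
// None of these fields or defaults exist today; the values are placeholders.
const (
	defaultSoftLimitBytes = 64 << 20  // 64 MiB of unreleased incoming calls (assumed)
	defaultHardLimitBytes = 256 << 20 // 256 MiB of total transport memory (assumed)
)

// MemoryLimits is a hypothetical options struct for the two limits.
type MemoryLimits struct {
	// SoftLimitBytes: reading from the transport blocks while unreleased
	// incoming Call messages exceed this many bytes. Zero means "use the
	// default", which is what makes the zero value safe by default.
	SoftLimitBytes int64
	// HardLimitBytes: the connection is dropped once memory held by the
	// transport exceeds this many bytes. Zero means "use the default".
	HardLimitBytes int64
}

// withDefaults fills in zero fields so that existing callers, who never set
// these options, still get some protection without a code change.
func (l MemoryLimits) withDefaults() MemoryLimits {
	if l.SoftLimitBytes == 0 {
		l.SoftLimitBytes = defaultSoftLimitBytes
	}
	if l.HardLimitBytes == 0 {
		l.HardLimitBytes = defaultHardLimitBytes
	}
	return l
}
```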

zenhack added this to the 3.0 milestone Aug 27, 2022
lthibault (Collaborator) commented:

Instead of deadlocking when the limit is reached, could we close the connection and log a warning? I'd prefer to have this fail noisily.


zenhack commented Aug 27, 2022

That sounds like just having limit (2)? We could also maybe have a timeout for the first one (or both), after which it drops the connection.

lthibault (Collaborator) commented:

> We could also maybe have a timeout for the first one (or both), after which it drops the connection.

Yeah, this is the kind of thing I had in mind. Having a hard-vs-soft limit makes sense, but I'd also like the soft limit to become a hard one when it deadlocks. A timeout mechanism seems like a sensible way to do this. 👍
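
A rough sketch of how that escalation could work, assuming the soft limit is tracked with a weighted semaphore (golang.org/x/sync/semaphore); the names, the timeout value, and the drop hook are illustrative assumptions rather than anything that exists in the library.

```go
package limits

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/semaphore"
)

// softLimitTimeout is how long a reader may wait under the soft limit
// before we treat the situation as a hard failure (assumed value).
const softLimitTimeout = 30 * time.Second

// softLimiter tracks bytes of unreleased incoming messages against the
// soft limit and drops the connection if waiting takes too long.
type softLimiter struct {
	sem  *semaphore.Weighted // capacity == soft limit, in bytes
	drop func()              // closes the connection (the "hard" escalation)
}

func newSoftLimiter(limitBytes int64, drop func()) *softLimiter {
	return &softLimiter{sem: semaphore.NewWeighted(limitBytes), drop: drop}
}

// reserve blocks until size bytes are available under the soft limit. If
// the wait exceeds softLimitTimeout (e.g. because mutually recursive calls
// have deadlocked against the limit), the connection is dropped so the
// failure is noisy instead of a silent hang.
func (l *softLimiter) reserve(ctx context.Context, size int64) error {
	ctx, cancel := context.WithTimeout(ctx, softLimitTimeout)
	defer cancel()

	if err := l.sem.Acquire(ctx, size); err != nil {
		l.drop()
		return fmt.Errorf("soft memory limit still exceeded after %v, dropping connection: %w",
			softLimitTimeout, err)
	}
	return nil
}

// release returns size bytes to the soft limit once the corresponding
// incoming message has been released.
func (l *softLimiter) release(size int64) {
	l.sem.Release(size)
}
```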
