Response future not dropped on disconnect #1716
In certain cases I see that the Response future is not dropped when the connection closes. I have a minimal reproduction here: https://github.com/luben/repro-hyper

The reproduction is a simplified work queue: workers wait on `GET /next`, clients submit work to `POST /`, and workers submit responses to `POST /response`. What happens is that if a worker restarts, it is left registered but disconnected, which leads the next client request to hang. I added a small debugging class that tells me when the futures are dropped; it confirms that the response future from `/next` is not dropped on worker disconnect.

Changing the workers to use HTTP/2 instead of HTTP/1.1 (un-comment https://github.com/luben/repro-hyper/blob/master/src/bin/worker.rs#L16) solves the issue.
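For context, here is a hypothetical, condensed sketch of that broker pattern (hyper 0.12-era API; the names, the payloads, and the omission of `POST /response` are simplifications, not the actual repro code). The relevant point is that the `GET /next` handler returns a future that owns a `oneshot::Receiver` and stays pending until a client posts work:

```rust
extern crate futures;
extern crate hyper;

use std::sync::{Arc, Mutex};

use futures::sync::oneshot;
use futures::{future, Future};
use hyper::service::service_fn;
use hyper::{Body, Method, Request, Response, Server};

// Senders for workers currently parked on GET /next.
type Waiting = Arc<Mutex<Vec<oneshot::Sender<Body>>>>;
type BoxFut = Box<Future<Item = Response<Body>, Error = hyper::Error> + Send>;

fn route(req: Request<Body>, waiting: Waiting) -> BoxFut {
    match (req.method(), req.uri().path()) {
        // A worker parks here until a client posts work. If the worker
        // disconnects, this future should be dropped; the bug is that
        // over HTTP/1.1 it is not.
        (&Method::GET, "/next") => {
            let (tx, rx) = oneshot::channel();
            waiting.lock().unwrap().push(tx);
            Box::new(rx.map(Response::new).or_else(|_canceled| {
                future::ok(Response::builder().status(500).body(Body::empty()).unwrap())
            }))
        }
        // A client submits work: hand it to a parked worker, if any.
        (&Method::POST, "/") => {
            if let Some(tx) = waiting.lock().unwrap().pop() {
                let _ = tx.send(Body::from("job payload"));
            }
            Box::new(future::ok(Response::new(Body::from("queued"))))
        }
        _ => Box::new(future::ok(Response::new(Body::empty()))),
    }
}

fn main() {
    let addr = ([127, 0, 0, 1], 3000).into();
    let waiting: Waiting = Arc::new(Mutex::new(Vec::new()));

    let server = Server::bind(&addr)
        .serve(move || {
            let waiting = waiting.clone();
            service_fn(move |req| route(req, waiting.clone()))
        })
        .map_err(|e| eprintln!("server error: {}", e));

    hyper::rt::run(server);
}
```

If hyper drops that future when the worker's socket closes, the parked `Sender` becomes useless and the broker can clean it up; the bug is that over HTTP/1.1 the future is never dropped.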
Comments
I haven't really investigated; could the fix for #1717 have fixed this too?
I will try and report back tomorrow.
I tried. It's not fixed. The way to reproduce it is with the repro repository linked above.
I think the bug is on the server, not the client: the broker is the server, and it does not drop the future on disconnect from the client (worker).
When I run it with trace log-level, the messages after I cancel the worker are:
... and nothing else. So it looks like nobody is polling the connection while the response future is waiting on the `oneshot::Receiver` for the Body.
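As a minimal illustration (using futures 0.1's `oneshot`, not the repro code) of why dropping that future matters: the future owns the `Receiver`, so dropping it is what lets the sending side detect the dead worker:

```rust
extern crate futures;

use futures::sync::oneshot;

fn main() {
    let (tx, rx) = oneshot::channel::<&'static str>();

    // Simulate hyper dropping the response future (which owns `rx`)
    // when the worker disconnects.
    drop(rx);

    // The broker's stored Sender now errors, so the dead worker can be
    // skipped. If the future is never dropped, send() instead succeeds
    // into a future nobody will ever poll, and the job is lost.
    assert!(tx.send("job payload").is_err());
    println!("dead worker detected");
}
```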
The connection isn't waiting on a read, due to this line: https://github.com/hyperium/hyper/blob/master/src/proto/h1/conn.rs#L231 It's basically there to apply TCP backpressure: don't start reading a second message while waiting to respond to the first. If there were a way to register for hang-up events without actually reading, hyper could still notice the disconnect.
Hm, I will look at how it is implemented on the h2 side and try to come up with a patch.
mio supports HUP registration via UnixReady, but that's not piped through Tokio. Adding it would likely require support in Tokio first.
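For reference, a standalone sketch of what HUP registration looks like with mio 0.6's `UnixReady` (nothing here is wired into Tokio or hyper, which is exactly the missing piece):

```rust
extern crate mio;

use mio::net::TcpStream;
use mio::unix::UnixReady;
use mio::{Events, Poll, PollOpt, Ready, Token};

fn main() -> std::io::Result<()> {
    let addr = "127.0.0.1:3000".parse().unwrap();
    let stream = TcpStream::connect(&addr)?;

    let poll = Poll::new()?;
    // Ask for readable *and* hang-up readiness on the socket.
    poll.register(
        &stream,
        Token(0),
        Ready::readable() | Ready::from(UnixReady::hup()),
        PollOpt::edge(),
    )?;

    let mut events = Events::with_capacity(16);
    poll.poll(&mut events, None)?;
    for event in events.iter() {
        if UnixReady::from(event.readiness()).is_hup() {
            // Peer closed: this is the notification hyper would need
            // in order to drop the in-flight response future.
            println!("connection hung up");
        }
    }
    Ok(())
}
```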
In h2, it's always reading, since it needs to in order to handle HTTP/2 bookkeeping: frames can come in related to any stream (or to the connection as a whole). We could try adding a way for h1 to notice the disconnect as well.
Just for a test, I disabled the check on that line, and the response future was dropped on worker disconnect.
Ah sure, if it were to close the connection immediately, then connections that half-close after writing the request would never be able to read the response... |
Yes, you are right.
A possible solution is to offer some sort of "support half-closed" config option (the default would need to be true to not break existing behavior); if false, that keep-alive read could forcibly shut the connection down, cancelling the service future.
This option determines whether a read EOF should close the connection automatically. The behavior was to always allow read EOF while waiting to respond, so this option has a default of `true`. Setting this option to `false` will allow Service futures to be canceled as soon as disconnect is noticed. Closes #1716
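For illustration, a sketch of how the option would be set on a server, assuming it surfaced as `http1_half_close` on hyper 0.12's `Server` builder:

```rust
extern crate hyper;

use hyper::rt::Future;
use hyper::service::service_fn_ok;
use hyper::{Body, Response, Server};

fn main() {
    let addr = ([127, 0, 0, 1], 3000).into();

    let new_svc = || service_fn_ok(|_req| Response::new(Body::from("hello")));

    let server = Server::bind(&addr)
        // `false` opts out of half-close support: a read EOF now tears
        // the connection down and cancels the in-flight Service future.
        .http1_half_close(false)
        .serve(new_svc)
        .map_err(|e| eprintln!("server error: {}", e));

    hyper::rt::run(server);
}
```

With `http1_half_close(false)`, a broker like the one in this issue would see its parked `GET /next` futures dropped as soon as a worker dies.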
thanks!