Error if connections_max_idle_ms not larger than request_timeout_ms #1688
Conversation
Reviewed 2 of 2 files at r1.
Reviewable status: complete! All files reviewed, all discussions resolved.
Would a warning be more appropriate here? Is this configuration always doomed, or is it possible some power user would want to go against the grain in some circumstance?
It's a great question, which I hadn't even considered when I first put this up. That said, I've thought about it several times and I cannot come up with a valid scenario... unlike Zookeeper, clients don't keep any session/request state that spans multiple connections.
`connections_max_idle_ms` must always be larger than `request_timeout_ms` to avoid potentially unexpected behavior. Fix #1680.
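For reference, here is a minimal sketch of the kind of guard this PR describes; the helper name `_validate_idle_vs_request_timeout` and the exact wording/location of the check are assumptions, not the actual kafka-python source:

```python
from kafka.errors import KafkaConfigurationError

def _validate_idle_vs_request_timeout(config):
    # Hypothetical helper: both values are in milliseconds, as in the
    # KafkaConsumer config. Reject configurations where an idle connection
    # could be reaped while a request might still legitimately be in flight.
    if config['connections_max_idle_ms'] <= config['request_timeout_ms']:
        raise KafkaConfigurationError(
            'connections_max_idle_ms ({}) must be larger than request_timeout_ms ({})'.format(
                config['connections_max_idle_ms'], config['request_timeout_ms']))
```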
```diff
 def test_fetch_max_wait_larger_than_request_timeout_raises(self):
     with pytest.raises(KafkaConfigurationError):
-        KafkaConsumer(bootstrap_servers='localhost:9092', fetch_max_wait_ms=41000, request_timeout_ms=40000)
+        KafkaConsumer(bootstrap_servers='localhost:9092', fetch_max_wait_ms=50000, request_timeout_ms=40000)
```
This was a simple change from 41000 to 50000, purely for improved readability/consistency with the other tests here; the test passes either way.
```python
def test_request_timeout_larger_than_connections_max_idle_ms_raises(self):
    with pytest.raises(KafkaConfigurationError):
        KafkaConsumer(bootstrap_servers='localhost:9092', api_version=(0, 9),
                      request_timeout_ms=50000, connections_max_idle_ms=40000)
```
I do not understand why this test fails with `NoBrokerFound` if `api_version=(0, 9)` is omitted... or rather, I realize it's a problem with the version probing timing out, but why does `test_fetch_max_wait_larger_than_request_timeout_raises()` right above succeed without needing to specify the version?
I think that may have been a side effect of the old AND check. If you drop the `api_version` here, I would expect the test to still pass.
IIRC, I actually did try that exact thing this morning, and it still failed with the `NoBrokerFound` error. That's what confuses me.
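As a hedged aside on the probing behavior discussed above: with kafka-python's default `api_version=None`, the client contacts a broker during construction to discover the broker's API version, so with no broker running the failure can surface as a connection error before the configuration check under test is reached; pinning `api_version` skips that probe. A rough illustration (the exact ordering of probe vs. validation is an assumption here):

```python
import pytest
from kafka import KafkaConsumer
from kafka.errors import KafkaConfigurationError

# api_version pinned: no broker version probe is attempted at construction
# time, so the configuration validation is what raises, even with no broker
# listening on localhost:9092.
with pytest.raises(KafkaConfigurationError):
    KafkaConsumer(bootstrap_servers='localhost:9092', api_version=(0, 9),
                  request_timeout_ms=50000, connections_max_idle_ms=40000)

# api_version left at its default (None): the client tries to probe a broker
# for its API version; if that probe runs before the validation, a missing
# broker shows up as a broker-connection error rather than the expected
# KafkaConfigurationError.
```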