Kafka Requester Produce/Consume Concerns #5
I discovered this package earlier today (I was reading your Benchmarking Commit Logs post), and after studying its source code, I had the same observation as @eapache regarding the producer you are using. I am not sure why you're going with an …

@eapache, regarding your second observation: this raised a flag here originally as well. If you inspect the code closely, however, you'll see that every process is posting to its own topic. So effectively there is a "lock down" going on, and you're guaranteed that the consumer is returning the message that was produced just before. This brings me to my third observation: the …
The Requester in this case is measuring the end-to-end latency from when a message is published to when it's read. It does this by publishing a message and then immediately waiting for it. IIRC there was no significant difference between using an AsyncProducer and a SyncProducer, due to the nature of just publishing and then waiting for the published message. The difference was more significant for the throughput test, but to make that meaningful we wait for all the acks to be received before considering the publisher finished (https://github.com/tylertreat/log-benchmarking/blob/master/cmd/throughput/benchmark/kafka.go).
It's not super-clear to me exactly what kind of round-trip behaviour you're trying to model, but I suspect the kafka requester isn't doing exactly what you think it's doing (or what you want it to do) for a few reasons:

- … SyncProducer instead and drop the consumer entirely.
- … Request concurrently, then you're probably OK, but I'm not sure.