Hi everyone,

We've started experiencing an issue that leads to reprocessing topic messages from the start, which appeared when we changed Sarama's initial offset policy from `OffsetNewest` to `OffsetOldest`. At first we thought the problem was the act of switching policies, but it turns out it occurs only with the `OffsetOldest` policy.
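For context, the policy switch in question is a one-line Sarama setting. A minimal sketch of the consumer setup we changed (the broker address and group name are placeholders, not our real values):

```go
package main

import (
	"log"

	"github.com/IBM/sarama"
)

func newConsumerConfig() *sarama.Config {
	config := sarama.NewConfig()
	// The change that triggered the behavior described above: when the
	// group has no committed offset, start from the oldest available
	// offset instead of the newest (previously sarama.OffsetNewest).
	config.Consumer.Offsets.Initial = sarama.OffsetOldest
	return config
}

func main() {
	// Placeholder broker address and group name.
	group, err := sarama.NewConsumerGroup([]string{"localhost:9092"}, "test-group", newConsumerConfig())
	if err != nil {
		log.Fatal(err)
	}
	defer group.Close()
}
```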
I wrote a simple script that consumes messages from a topic with the `OffsetOldest` policy and prints them to stdout (attached to the issue). I produced 10 messages and started the worker; when all the messages had been consumed, the consumer committed the latest message offset to the consumer group - the behavior I expected. If you stop the consumer with Ctrl-C and restart it immediately, it resumes from the group's latest committed offset, and everything is fine.
But if you wait several minutes before starting the consumer again, the offset is reset to the initial value (-2 in Sarama's case) and all the messages are reprocessed.
In our case this is a critical issue: a reprocessing run means consuming several million messages, which takes several hours because of the computations we have to do for every single message.
Seems like the issue relates to this one: IBM/sarama#2036
And one more question: should I open the same issue against Sarama as well?
Golang version: go version go1.21.1 linux/amd64
Sarama's version: github.com/IBM/sarama v1.41.2
Goka's version: github.com/lovoo/goka v1.1.9
To be honest, I can't reproduce the behavior at all. Messages were never reconsumed, however long I waited.
My guess is that it's a configuration issue on Kafka's consumer-offsets topic, so it's not related to goka or sarama.
I tried it with the kafka cluster set up by docker-compose in the examples folder, and everything worked as expected.
Did you use the same cluster? If not, and you have access to kafka-tools in your cluster, maybe check the stored offsets for the test-group. Maybe they're being reset for some reason after some time?
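If the kafka-tools CLI isn't available, the group's committed offsets can also be inspected programmatically. A rough sketch using sarama's admin client (broker address and group name are placeholders; this assumes a broker/protocol version recent enough that a nil topic-partition map fetches all committed offsets for the group):

```go
package main

import (
	"fmt"
	"log"

	"github.com/IBM/sarama"
)

func main() {
	// Placeholder broker address and group name.
	admin, err := sarama.NewClusterAdmin([]string{"localhost:9092"}, sarama.NewConfig())
	if err != nil {
		log.Fatal(err)
	}
	defer admin.Close()

	// Passing nil asks for all topic-partitions the group has offsets for.
	resp, err := admin.ListConsumerGroupOffsets("test-group", nil)
	if err != nil {
		log.Fatal(err)
	}
	for topic, partitions := range resp.Blocks {
		for partition, block := range partitions {
			// An offset of -1 means no committed offset for this partition,
			// i.e. the Consumer.Offsets.Initial policy would apply.
			fmt.Printf("%s/%d: committed offset %d\n", topic, partition, block.Offset)
		}
	}
}
```

Running this before and after the "wait several minutes" window should show whether the committed offsets actually disappear.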
Thanks for the answer! Last week we were trying to figure out what happens in Kafka when this issue occurs.
It turned out that the consumer group was deleted by Kafka about 10 minutes after the processor was shut down. We tried different Kafka versions until we realized the problem was in Sarama's group configuration.
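For anyone hitting the same symptom: once a consumer group becomes empty, the broker expires its committed offsets after `offsets.retention.minutes` (the default was 1440 minutes, i.e. 24 hours, before Kafka 2.0 and is 7 days since), and Sarama exposes a per-client override of this retention on offset commits. A sketch of the knob in question (the 7-day value here is illustrative, not our actual fix):

```go
package main

import (
	"time"

	"github.com/IBM/sarama"
)

func newConsumerConfig() *sarama.Config {
	config := sarama.NewConfig()
	config.Consumer.Offsets.Initial = sarama.OffsetOldest
	// If left at zero (the default), the broker-side
	// offsets.retention.minutes setting applies. A non-zero value asks
	// the broker to keep this group's committed offsets for at least
	// this long after the group becomes empty.
	config.Consumer.Offsets.Retention = 7 * 24 * time.Hour
	return config
}
```

If the retention (on the broker or in the client config) is shorter than the downtime between consumer restarts, the offsets are gone by the time the consumer comes back, and `Consumer.Offsets.Initial` kicks in, which with `OffsetOldest` means reprocessing from the start.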