6fb7715 broke pull consumer - possibly because of out of order acks? [v2.10.17] #5720
Can you please provide a
@paolobarbolini were you using the same consumer name in the leafnode server and the remote leafnode?
Hey, sorry for the radio silence. Here's the info you've requested about the leafnode (info about the current broken system):

stream info

consumer info

Note on the 1001: it was successfully able to poll 1 item and then 1000 items. The following polls all yielded no items.

server info
Built with Go 1.22.6
No, they have different names. The stream names are the same though.

Testing with 6fb7715 reverted, build:

```shell
git clone https://github.com/nats-io/nats-server.git -b release/v2.10.20
cd nats-server/
git revert 6fb7715a46b5f64dff144f0357f009ad3bf64473
GOOS=linux GOARCH=arm64 GO111MODULE=on CGO_ENABLED=0 go build -trimpath -ldflags="-w -X 'github.com/nats-io/nats-server/v2/server.gitCommit=$(git rev-parse --short HEAD)'" -o nats-server-arm64
# deploy and IMPORTANT: delete the consumer
```

stream info

consumer info (after a few minutes)
I will try to provide a minimal test case, but I hit the same issue. If my client does the explicit acks in order there is no problem, but if my client schedules the acks asynchronously (which doesn't guarantee the order), or if I explicitly ack in a random order, I start experiencing acknowledgement issues: (Outstanding Acks: 1,000 out of maximum 1,000)
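The out-of-order ack concern above can be illustrated with a toy model of how an explicit-ack consumer's ack floor advances (this is a sketch of the general AckExplicit semantics, not nats-server's actual implementation): the floor only moves past a sequence once every sequence below it has been acked, so acking in a random order leaves gaps that keep earlier messages counted as outstanding.

```go
package main

import "fmt"

// ackFloor returns the highest sequence s such that every sequence
// in [1, s] is present in acked (0 if sequence 1 is still unacked).
// Toy model of AckExplicit floor tracking, not server code.
func ackFloor(acked map[uint64]bool) uint64 {
	var floor uint64
	for acked[floor+1] {
		floor++
	}
	return floor
}

func main() {
	acked := map[uint64]bool{}

	// Ack out of order: sequences 1 and 3, but not 2.
	acked[1] = true
	acked[3] = true
	fmt.Println(ackFloor(acked)) // floor stuck at 1 because of the gap at 2

	// Filling the gap lets the floor jump forward past 3.
	acked[2] = true
	fmt.Println(ackFloor(acked))
}
```

In this model, seq 3 stays "pending" from the floor's point of view until seq 2 is acked, which is the kind of state that random-order async acking produces constantly.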
Observed behavior
After upgrading from v2.10.16, the pull consumer on one of our arm64 leafnodes stopped working. As new messages came into the stream, the NEXT endpoint continued returning 404 despite not having reached the pending acks limit. This has been bisected to 6fb7715, after v2.10.18 and v2.10.19-RC.2 exhibited the same problem. To double check, we took v2.10.19-RC.2 and reverted the suspect commit, which fixed the issue.

Expected behavior
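For anyone wanting to repeat that kind of bisection, `git bisect run` automates it. The sketch below runs against a throwaway synthetic repo so it works anywhere; for the real case the good/bad endpoints would be v2.10.16 and v2.10.17 of nats-server, and the test script would build the server and exercise the pull consumer instead of grepping a file.

```shell
# Synthetic demo of `git bisect run`: build a 5-commit repo where
# commit 4 introduces the "bug", then let bisect find it automatically.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email bisect@example.com
git config user.name bisect
for i in 1 2 3 4 5; do
    echo "change $i" > file
    if [ "$i" -ge 4 ]; then echo bug >> file; fi   # the regression
    git add file
    git commit -qm "commit $i"
done
git bisect start HEAD HEAD~4 >/dev/null   # bad = HEAD, good = first commit
# The run script exits 0 for good commits, non-zero for bad ones;
# here "bad" simply means the file contains the word "bug".
git bisect run sh -c '! grep -q bug file' | grep "first bad commit"
```

The last line prints the first bad commit, which is how a regression like this one gets pinned to a single commit hash.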
The consumer continues delivering messages.
Server and client version
nats-server: v2.10.17
nats: v0.1.5
Host environment
Steps to reproduce
The pull consumer is on a stream mirroring $MQTT_msgs, with the rest of the settings left to default.

I'm not sure how to reproduce it. It doesn't seem to happen on our other machines. The only two things I can think of are:

$MQTT_msgs
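For context, a setup like the one described can be sketched with the nats CLI (stream and consumer names here are hypothetical, a reachable JetStream-enabled server is assumed, and this is not the reporter's exact configuration):

```shell
# Mirror of the internal $MQTT_msgs stream, all other settings default.
nats stream add MQTT_MIRROR --mirror '$MQTT_msgs' --defaults

# Durable pull consumer with explicit acks on the mirror.
nats consumer add MQTT_MIRROR worker --pull --ack explicit --defaults
```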
Strangely if I delete the consumer and recreate it all messages that are already in the stream get delivered, but the issue still occurs on new messages.