TLS Handshake error between Nats pod and kube-system pods #921
Labels: defect (Suspected defect such as a bug or regression)
What version were you using?
helm version: 1.2.1
Nats: nats:2.10.17-alpine
What environment was the server running in?
Azure AKS, Kubernetes version 1.28.9
Nats chart included as a sub-chart of our main release, so the following config gets merged with the nats chart's default values.yaml.

Nats section of the values.yaml:
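A minimal sketch of the shape of that section, assuming the 1.x chart values schema with TLS on the client port sourced from an existing secret; the anchor location, sub-chart alias, and secret name below are placeholders rather than the real values:

```yaml
# Sketch only, not the exact values from the issue.
# Parent-chart values.yaml, with the nats chart pulled in as a sub-chart.
global:
  # Placeholder: the anchor that the alias in the sub-chart values refers to.
  tlsCertSecretName: &tlsCertSecretName nats-server-tls

nats:                          # sub-chart alias in the parent chart
  config:
    cluster:
      enabled: false           # single node for now, no clustering
    nats:
      tls:
        enabled: true          # require TLS on the client port
        # Name of the secret created by the manually deployed Certificate:
        secretName: *tlsCertSecretName
```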
The `*tlsCertSecretName` is a Helm alias to the name of a manually deployed Cert:
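A sketch of that Certificate, assuming it is issued by cert-manager; the issuer, namespace, and DNS names are placeholders, not the actual resource:

```yaml
# Sketch only, assuming cert-manager issues the server certificate.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nats-server-tls
  namespace: nats                     # placeholder namespace
spec:
  # Must match the secret name the *tlsCertSecretName alias points at:
  secretName: nats-server-tls
  issuerRef:
    name: my-cluster-issuer           # placeholder issuer
    kind: ClusterIssuer
  dnsNames:
    - nats.example.com                # external hostname clients will use
```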
Is this defect reproducible?

Reliably occurs with the above config on our AKS cluster. Haven't tested on other K8s varieties.
Given the capability you are leveraging, describe your expectation?
I am attempting to run a simple single node nats instance without clustering (at this stage). The goal is to deploy it with a pre-configured operator, account, and user, and TLS required for connections coming in from outside the K8s cluster.
Further down the line I will need to configure clustering, but TLS and mem-resolver based auth are the requirements for this stage of work.
I'd like to get the pod running healthily without throwing TLS errors.
I was on the fence about whether to label this as a defect; I'm not sure if this is just a misunderstanding on my part of how TLS is supposed to be configured for the Nats helm chart. Happy to change the category if it looks like the error's on my end, but I could really use some guidance on how to get this sorted!
Given the expectation, what is the defect you are observing?
With the above config, I get repeated TLS handshake errors in the nats pod logs. It seems as though the nats pod is attempting to dial pods in the kube-system namespace but failing to do so (assuming that directionality from the `->` in the log messages).

10.244.14.232 is the pod IP of the nats instance. All of the IPs that it is attempting to dial look like kube-system pods, some of which share the same internal cluster IP.

This is where my understanding starts to break down: I'm not sure why nats would be dialing pods in the kube-system namespace. Would really appreciate any guidance you can provide! It almost feels like setting `nats.tls.enabled` broke something with nats talking to other in-cluster pods.

I looked at examples like https://gist.github.com/wallyqs/d9c9131a5bd5e247b2e4a6d4aac898af, but it seemed as though they were aimed at nats clustering. As far as I understand, it doesn't seem like I need self-signed routes config for a single-pod deploy?