Steps to reproduce

1. Configure your deployment to use Cluster.Strategy.Kubernetes.DNS for Kubernetes service discovery (see the example configuration after these steps).
2. Scale the deployment up to more than one pod (say node1, node2, and node3 are connected in the topology).
3. Scale the deployment down (say Kubernetes decides to terminate node2).
4. The other nodes (node1, node3) now disconnect from node2.
5. While node2 performs its graceful shutdown and has already been removed from service discovery, it still sees the other nodes (node1, node3) when it polls service discovery, and it reconnects to them. As a result, the other nodes are connected to node2 again until the graceful shutdown completes.
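For reference, a minimal libcluster topology configuration of this kind might look like the following sketch; the service name, application name, and polling interval are placeholders, not values from the original report:

```elixir
# config/runtime.exs (sketch): cluster via the headless Service's DNS records.
import Config

config :libcluster,
  topologies: [
    k8s_dns: [
      strategy: Cluster.Strategy.Kubernetes.DNS,
      config: [
        service: "myapp-headless",   # headless Service backing the pods (placeholder)
        application_name: "myapp",   # node names become :"myapp@<pod-ip>" (placeholder)
        polling_interval: 10_000     # re-resolve DNS every 10 seconds
      ]
    ]
  ]
```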
Description of issue
When I scaled the deployment down, I expected libcluster to disconnect the node as soon as the Kubernetes pod started terminating. However, after the node was disconnected, it rejoined the cluster and remained visible via Node.list() until the pod was fully terminated.
What are the expected results?
When a node is no longer part of the topology (for example, no longer returned by Kubernetes service discovery), it should not reconnect to the other nodes.
Suggested Solution & Discussion
To address this, I cloned the original strategy (Cluster.Strategy.Kubernetes.DNS) and patched it by adding a validation to the connection process: a node only connects to the other nodes if it is itself still a member of the topology (in my case, one of the nodes returned by the nslookup discovery). Is that a correct solution? Can we collaborate on a general fix that fits into the libcluster core?
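A minimal sketch of that guard, under my assumptions, is below. It is not the actual patch: DNSGuarded, maybe_connect/1, and get_nodes/1 are hypothetical names, and only Cluster.Strategy.State and Cluster.Strategy.connect_nodes/4 come from libcluster itself.

```elixir
defmodule Cluster.Strategy.Kubernetes.DNSGuarded do
  @moduledoc """
  Hypothetical variation of Cluster.Strategy.Kubernetes.DNS: only attempt to
  connect when the local node is itself still part of the resolved topology.
  """

  alias Cluster.Strategy.State

  # Called on each polling tick with the strategy state.
  def maybe_connect(%State{} = state) do
    resolved = get_nodes(state)

    if node() in resolved do
      # The local node is still in service discovery, so connect as usual.
      Cluster.Strategy.connect_nodes(
        state.topology,
        state.connect,
        state.list_nodes,
        resolved
      )
    else
      # The local node is no longer returned by service discovery (e.g. its
      # pod is terminating), so skip reconnecting to the rest of the cluster.
      :ok
    end
  end

  # Placeholder: the real strategy resolves the headless Service via DNS and
  # maps each returned IP to :"#{application_name}@#{ip}".
  defp get_nodes(%State{}), do: []
end
```

Part of the discussion would be whether such a membership check belongs in each strategy's polling loop or in a shared helper in the libcluster core.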