Modern Kubernetes clusters can operate with both IPv4 and IPv6 networking simultaneously, a feature known as DualStack networking. This dual-protocol approach offers several advantages:
- It allows pods and services to communicate using either IPv4 or IPv6 addresses.
- It helps organizations transition gradually from IPv4 to IPv6.
- It enables native communication with both legacy IPv4 systems and modern IPv6 infrastructure.
Starting with Kubernetes 1.21, DualStack networking is enabled by default (it graduated to stable in 1.23), making it easier to deploy applications that can handle both IPv4 and IPv6 traffic. This guide will walk you through setting up a DualStack-enabled Kubernetes cluster from scratch.
A DualStack-enabled Kubernetes cluster provides:
- Dual IP Pod Networking: Each Pod receives both an IPv4 and an IPv6 address, enabling communication over either protocol.
- Protocol-Flexible Services: Kubernetes Services can listen on both IPv4 and IPv6 addresses.
- Dual-Protocol External Access: Pods can reach external resources (including the internet) using either IPv4 or IPv6.
Before implementing DualStack, ensure you have:
- Kubernetes Version: 1.20 or newer (later versions recommended for improved stability)
- Infrastructure Support: Your infrastructure provider (cloud or on-premises) must support dual-protocol networking and be able to assign both IPv4 and IPv6 addresses to nodes
- Compatible CNI Plugin: Your Container Network Interface (CNI) plugin must explicitly support DualStack operation. Common options include:
  - Calico
  - Cilium
  - Weave Net
  - Flannel (with specific configurations)
Table of Contents
- Create Azure Virtual Machines for the Kubernetes Cluster
- Create the Kubernetes Cluster
- Install Calico
- IPv6 Configuration Validation
  - Validate Node Addressing
  - Validate Pod IP Addresses
    - Using kubectl
    - Using the Downward API
    - Using /etc/hosts
  - Validating Services
    - Single-Stack Service
    - Single-Stack IPv6 Service
    - Preferred Dual-Stack Service
    - Dual-Stack Service with IPv6 Preference
- Let’s start by defining some variables:

export RG=rmartins-dualstack
export LOCATION=westus2
- Create an Azure Resource Group for all resources.
az group create --name $RG --location $LOCATION
- Create an ssh-key to access the VMs in the resource group.
az sshkey create --name ssh-key --resource-group $RG
Note the key pair name generated:
Example output:
No public key is provided. A key pair is being generated for you.
Private key is saved to "/Users/regis/.ssh/1735836400_1739929".
Public key is saved to "/Users/regis/.ssh/1735836400_1739929.pub".

Change the permissions for the generated key:
Example command:
chmod 400 ~/.ssh/1735836400_1739929
- Create the network resources for the VMs.

# create a network security group for the kubernetes vms
az network nsg create --resource-group $RG --name k8s-nsg

# create nsg rules to allow ssh and nodeport traffic
az network nsg rule create \
  --resource-group $RG \
  --nsg-name k8s-nsg \
  --name AllowSSH \
  --priority 1000 \
  --access Allow \
  --protocol Tcp \
  --direction Inbound \
  --source-address-prefixes '*' \
  --source-port-ranges '*' \
  --destination-address-prefixes '*' \
  --destination-port-ranges 22

az network nsg rule create \
  --resource-group $RG \
  --nsg-name k8s-nsg \
  --name AllowCustomPorts \
  --priority 1100 \
  --access Allow \
  --protocol Tcp \
  --direction Inbound \
  --source-address-prefixes '*' \
  --source-port-ranges '*' \
  --destination-address-prefixes '*' \
  --destination-port-ranges 30000-32747

# create a vnet and subnets for the kubernetes cluster with IPv6 and IPv4 addresses
az network vnet create \
  --resource-group $RG \
  --location $LOCATION \
  --name k8s-vnet \
  --address-prefixes 10.0.0.0/16 fd00:db8:0::/48 \
  --subnet-name k8s-subnet \
  --subnet-prefixes 10.0.0.0/24 fd00:db8:0:0::/64 \
  --network-security-group k8s-nsg
- Create public IP addresses for each node, both IPv4 and IPv6.

# k8s-cp
az network public-ip create \
  --resource-group $RG \
  --name k8s-cp-PublicIP-Ipv4 \
  --sku Standard \
  --version IPv4 \
  --zone 1 2 3

az network public-ip create \
  --resource-group $RG \
  --name k8s-cp-PublicIP-Ipv6 \
  --sku Standard \
  --version IPv6 \
  --zone 1 2 3

# k8s-wk1
az network public-ip create \
  --resource-group $RG \
  --name k8s-wk1-PublicIP-Ipv4 \
  --sku Standard \
  --version IPv4 \
  --zone 1 2 3

az network public-ip create \
  --resource-group $RG \
  --name k8s-wk1-PublicIP-Ipv6 \
  --sku Standard \
  --version IPv6 \
  --zone 1 2 3

# k8s-wk2
az network public-ip create \
  --resource-group $RG \
  --name k8s-wk2-PublicIP-Ipv4 \
  --sku Standard \
  --version IPv4 \
  --zone 1 2 3

az network public-ip create \
  --resource-group $RG \
  --name k8s-wk2-PublicIP-Ipv6 \
  --sku Standard \
  --version IPv6 \
  --zone 1 2 3
- Create a network interface for each node, attaching the IP addresses just created.

# k8s-cp
az network nic create \
  --resource-group $RG \
  --name k8s-cp-NIC \
  --vnet-name k8s-vnet \
  --subnet k8s-subnet \
  --network-security-group k8s-nsg \
  --ip-forwarding true \
  --public-ip-address k8s-cp-PublicIP-Ipv4

az network nic ip-config create \
  --resource-group $RG \
  --name k8s-cp-IPv6config \
  --nic-name k8s-cp-NIC \
  --private-ip-address-version IPv6 \
  --vnet-name k8s-vnet \
  --subnet k8s-subnet \
  --public-ip-address k8s-cp-PublicIP-Ipv6

# k8s-wk1
az network nic create \
  --resource-group $RG \
  --name k8s-wk1-NIC \
  --vnet-name k8s-vnet \
  --subnet k8s-subnet \
  --network-security-group k8s-nsg \
  --ip-forwarding true \
  --public-ip-address k8s-wk1-PublicIP-Ipv4

az network nic ip-config create \
  --resource-group $RG \
  --name k8s-wk1-IPv6config \
  --nic-name k8s-wk1-NIC \
  --private-ip-address-version IPv6 \
  --vnet-name k8s-vnet \
  --subnet k8s-subnet \
  --public-ip-address k8s-wk1-PublicIP-Ipv6

# k8s-wk2
az network nic create \
  --resource-group $RG \
  --name k8s-wk2-NIC \
  --vnet-name k8s-vnet \
  --subnet k8s-subnet \
  --network-security-group k8s-nsg \
  --ip-forwarding true \
  --public-ip-address k8s-wk2-PublicIP-Ipv4

az network nic ip-config create \
  --resource-group $RG \
  --name k8s-wk2-IPv6config \
  --nic-name k8s-wk2-NIC \
  --private-ip-address-version IPv6 \
  --vnet-name k8s-vnet \
  --subnet k8s-subnet \
  --public-ip-address k8s-wk2-PublicIP-Ipv6
- Finally, create the virtual machines for each node.
Use the custom-data script for each node type: control plane (cp.sh) and worker (wk.sh).

One requirement for setting up the virtual machines correctly for a dual-stack Kubernetes installation is enabling both IPv4 and IPv6 packet forwarding:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
This is already handled by the custom-data scripts.
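The full cp.sh and wk.sh scripts are not reproduced here. As a rough, hypothetical sketch of the dual-stack-relevant part of a control-plane bootstrap (the file name, exact CIDRs, and node IPs below are assumptions chosen to line up with the pools and addresses used later in this guide), the kubeadm configuration would look something like this:

# Hypothetical excerpt of a cp.sh-style bootstrap: dual-stack Pod and Service
# CIDRs (IPv4,IPv6) handed to kubeadm init via a config file.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    node-ip: "10.0.0.4,fd00:db8::4"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/16,fd00:db8:0:192::/64
  serviceSubnet: 10.96.0.0/12,fd00:db8:10:96::/112
EOF

sudo kubeadm init --config kubeadm-config.yaml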
From the folder where you saved the custom data scripts (wk.sh and cp.sh) run the following commands to create the VMs.
# k8s-cp
az vm create \
  --resource-group $RG \
  --name k8s-cp \
  --nics k8s-cp-NIC \
  --image Ubuntu2404 \
  --size Standard_B2ms \
  --admin-username azureuser \
  --ssh-key-name ssh-key \
  --os-disk-size-gb 30 \
  --storage-sku Premium_LRS \
  --custom-data ./cp.sh

# k8s-wk1
az vm create \
  --resource-group $RG \
  --name k8s-wk1 \
  --nics k8s-wk1-NIC \
  --image Ubuntu2404 \
  --size Standard_B2ms \
  --admin-username azureuser \
  --ssh-key-name ssh-key \
  --os-disk-size-gb 30 \
  --storage-sku Premium_LRS \
  --custom-data ./wk.sh

# k8s-wk2
az vm create \
  --resource-group $RG \
  --name k8s-wk2 \
  --nics k8s-wk2-NIC \
  --image Ubuntu2404 \
  --size Standard_B2ms \
  --admin-username azureuser \
  --ssh-key-name ssh-key \
  --os-disk-size-gb 30 \
  --storage-sku Premium_LRS \
  --custom-data ./wk.sh
After creating the virtual machines, save the public IP addresses so you can log into them and finish the Kubernetes cluster configuration.
Example output:
{
"fqdns": "",
"id": "/subscriptions/03cfb895-358d-4ad4-8aba-aeede8dbfc30/resourceGroups/rmartins-dualstack/providers/Microsoft.Compute/virtualMachines/k8s-cp",
"location": "westus2",
"macAddress": "00-22-48-BF-82-DB",
"powerState": "VM running",
"privateIpAddress": "10.0.0.4,fd00:db8::4",
"publicIpAddress": "4.149.139.34,2603:1030:c04:e::b",
"resourceGroup": "rmartins-dualstack",
"zones": ""
}
{
"fqdns": "",
"id": "/subscriptions/03cfb895-358d-4ad4-8aba-aeede8dbfc30/resourceGroups/rmartins-dualstack/providers/Microsoft.Compute/virtualMachines/k8s-wk1",
"location": "westus2",
"macAddress": "00-22-48-C0-CD-C4",
"powerState": "VM running",
"privateIpAddress": "10.0.0.5,fd00:db8::5",
"publicIpAddress": "4.149.139.68,2603:1030:c04:e::e",
"resourceGroup": "rmartins-dualstack",
"zones": ""
}
{
"fqdns": "",
"id": "/subscriptions/03cfb895-358d-4ad4-8aba-aeede8dbfc30/resourceGroups/rmartins-dualstack/providers/Microsoft.Compute/virtualMachines/k8s-wk2",
"location": "westus2",
"macAddress": "00-0D-3A-FC-77-CD",
"powerState": "VM running",
"privateIpAddress": "10.0.0.6,fd00:db8::6",
"publicIpAddress": "4.149.139.82,2603:1030:c04:e::12",
"resourceGroup": "rmartins-dualstack",
"zones": ""
}
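If you need to look these addresses up again later, the Azure CLI can list them for every VM in the resource group (same $RG variable as before):

# List public and private IPs for all VMs in the resource group
az vm list-ip-addresses --resource-group $RG --output table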
- Log into the control plane first using the public IP address and the key previously generated.
ssh -i ~/.ssh/1735836400_1739929 azureuser@4.149.139.34
Escalate your privileges to the root user.
sudo su - root
Retrieve the kubeadm join command from /var/log/cloud-init-output.log:
grep -e 'kubeadm\|discovery' /var/log/cloud-init-output.log
Copy and save the line with the token on it:
kubeadm join 10.0.0.4:6443 --token qwc4jt.h8u7m63ulcxkwnel --discovery-token-ca-cert-hash sha256:e33d7f81aed91ef221cac88b0b864d3c87f2e6cc0d521ac18cdc6096902c0941
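Bootstrap tokens expire after 24 hours by default, so if the token from the cloud-init log is no longer valid, you can generate a fresh join command on the control plane:

# Prints a ready-to-use "kubeadm join ..." command with a new token
kubeadm token create --print-join-command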
- Log into each of the worker nodes and run the kubeadm command to join them to the control plane, creating the cluster.
ssh -i ~/.ssh/1735836400_1739929 azureuser@4.149.139.68

# escalate privilege to root user
sudo su - root

# execute the kubeadm command
kubeadm join 10.0.0.4:6443 --token qwc4jt.h8u7m63ulcxkwnel --discovery-token-ca-cert-hash sha256:e33d7f81aed91ef221cac88b0b864d3c87f2e6cc0d521ac18cdc6096902c0941
- Include the node-ip parameter in the kubeadm-flags.env file.
# update the --node-ip in the kubeadm-flags.env
# Backup the original file
cp /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/kubeadm-flags.env.backup

# Update the file with the new node-ip argument
read -r IPv6 IPv4 <<< $(hostname -i) && \
sed -i "s|KUBELET_KUBEADM_ARGS=\"\(.*\)\"|KUBELET_KUBEADM_ARGS=\"\1 --node-ip=$IPv4,$IPv6\"|" /var/lib/kubelet/kubeadm-flags.env

# Restart kubelet to apply changes
systemctl restart kubelet
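As a quick sanity check (not part of the original steps), you can confirm the kubelet picked up both addresses: on the worker, inspect the flags file, and from the control-plane shell, list the node's addresses.

# On the worker node: the argument should now contain both the IPv4 and IPv6 addresses
grep node-ip /var/lib/kubelet/kubeadm-flags.env

# From the control-plane shell: both InternalIP entries should be listed
kubectl get node k8s-wk1 -o go-template --template='{{range .status.addresses}}{{printf "%s: %s\n" .type .address}}{{end}}'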
- Go back to the control-plane shell and check the nodes of the cluster.
Example output:
kubectl get nodes

NAME      STATUS     ROLES           AGE   VERSION
k8s-cp    NotReady   control-plane   55m   v1.30.7
k8s-wk1   NotReady   <none>          28m   v1.30.7
k8s-wk2   NotReady   <none>          26m   v1.30.7
The nodes are in a NotReady status because there is no CNI installed yet. Next, let’s install Calico, finish the cluster configuration, and test it.
- Install the Tigera operator and custom resource definitions.
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/tigera-operator.yaml
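Optionally, confirm that the operator Deployment has come up before continuing:

# The tigera-operator manifest creates this namespace and the operator pod
kubectl get pods -n tigera-operator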
- Download and configure the Tigera custom resources.
curl https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/custom-resources.yaml -O
- Configure the ipPools in the custom resources.
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
      - blockSize: 26
        cidr: 192.168.0.0/16
        encapsulation: VXLAN
        natOutgoing: Enabled
        disableBGPExport: true
        nodeSelector: all()
      - blockSize: 122
        cidr: fd00:db8:0:192::/64
        encapsulation: None
        natOutgoing: Enabled
        disableBGPExport: true
        nodeSelector: all()
- Apply the custom resources and monitor the Calico processes.
kubectl apply -f custom-resources.yaml

# monitor the progress
watch kubectl get tigerastatus
Example output:
Every 2.0s: kubectl get tigerastatus          k8s-cp: Thu Jan 23 16:59:39 2025

NAME        AVAILABLE   PROGRESSING   DEGRADED   SINCE
apiserver   True        False         False      83s
calico      True        False         False      73s
ippools     True        False         False      2m13s
- Verify that the nodes are all in Ready status.
kubectl get nodes
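If you prefer to block until every node reports Ready instead of polling manually, a standard kubectl wait does the job:

# Waits up to 5 minutes for all nodes to report the Ready condition
kubectl wait --for=condition=Ready node --all --timeout=300s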
Here is how to validate a dual-stack-enabled Kubernetes cluster.
To verify the network configuration of a dual-stack Node:
- Each Node should be allocated exactly:
  - One IPv4 address block
  - One IPv6 address block
- To check these Pod address ranges, run this command, replacing the Node name with one from your cluster:
kubectl get nodes k8s-cp -o go-template --template='{{range .spec.podCIDRs}}{{printf "%s\n" .}}{{end}}'
Example output:
192.168.0.0/24
fd00:db8:0:192::/64
Note
The example uses k8s-cp as the Node name - replace this with an actual Node name from your cluster.
There should be one IPv4 block and one IPv6 block allocated.
To verify that a Node has both IPv4 and IPv6 interfaces:
- Run this command, replacing the Node name with one from your cluster:
kubectl get nodes k8s-cp -o go-template --template='{{range .status.addresses}}{{printf "%s: %s\n" .type .address}}{{end}}'
- The output should show:
  - A hostname
  - An IPv4 address (starts with numbers, contains dots)
  - An IPv6 address (contains hexadecimal values and colons)
Example output:
InternalIP: 10.0.0.4
InternalIP: fd00:db8::4
Hostname: k8s-cp
Note
The example uses k8s-cp as the Node name - replace this with an actual Node name from your cluster.
First, let’s create a Pod for testing:
kubectl apply -f - <<-EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: pod1
  name: pod1
spec:
  containers:
  - args:
    - sleep
    - "3600"
    image: busybox
    name: pod1
    env:
    - name: MY_POD_IPS
      valueFrom:
        fieldRef:
          fieldPath: status.podIPs
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
EOF
There are three ways to verify that a Pod has both IPv4 and IPv6 addresses assigned:
Run this command, replacing pod1 with your Pod's name:
kubectl get pods pod1 -o go-template --template='{{range .status.podIPs}}{{printf "%s\n" .ip}}{{end}}'
Example output:
192.168.36.22
fd00:db8:0:192:7a8c:db8f:dd22:9915
You can expose Pod IPs as an environment variable within containers:
- Add this to your Pod specification:

env:
- name: MY_POD_IPS
  valueFrom:
    fieldRef:
      fieldPath: status.podIPs
- View the environment variable inside the Pod:
kubectl exec -it pod1 -- env | grep MY_POD_IPS
Example output:
MY_POD_IPS=192.168.36.22,fd00:db8:0:192:7a8c:db8f:dd22:9915
The Pod's IP addresses are automatically written to /etc/hosts within the container. You can view them with:
kubectl exec -it pod1 -- cat /etc/hosts
Example output:
# Kubernetes-managed hosts file.
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
192.168.36.22   pod1
fd00:db8:0:192:7a8c:db8f:dd22:9915      pod1
First, let’s create a Deployment to be used as the endpoint for the Services.
kubectl apply -f - <<-EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
EOF
- Create a basic Service without specifying ipFamilyPolicy:
kubectl apply -f - <<-EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-single-stack
  labels:
    name: nginx-single-stack
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
EOF
- View the Service configuration:
kubectl get service nginx-single-stack -o yaml
Key points:
- When ipFamilyPolicy is not specified, Kubernetes will:
  - Assign a cluster IP from the first configured service-cluster-ip-range
  - Automatically set ipFamilyPolicy to SingleStack
Note
This will create a Service that operates in single-stack mode, even in a dual-stack cluster.
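Instead of reading the full YAML, you can also pull just the relevant fields with jsonpath (an equivalent check, shown here as an extra option):

# Prints the policy, the ipFamilies list, and the assigned cluster IPs
kubectl get service nginx-single-stack -o jsonpath='{.spec.ipFamilyPolicy} {.spec.ipFamilies} {.spec.clusterIPs}{"\n"}'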
- Create a Service that specifically requests IPv6:
kubectl apply -f - <<-EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-single-stack-ipv6
  labels:
    name: nginx-single-stack-ipv6
spec:
  ipFamilies:
    - IPv6
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
EOF
- View the Service configuration:
kubectl get svc nginx-single-stack-ipv6 -o yaml
Expected configuration:
- spec.ipFamilyPolicy will be set to SingleStack
- spec.clusterIP will be an IPv6 address from the range specified in the kube-controller-manager's --service-cluster-ip-range flag
Note
This creates a Service that exclusively uses IPv6, even in a dual-stack cluster.
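As a quick reachability check (the same wget pattern used later in this guide, assuming the pod1 test Pod is still running), you can probe the Service's IPv6 cluster IP from inside the cluster:

# Grab the IPv6 cluster IP assigned to the Service
SVC_IP=$(kubectl get svc nginx-single-stack-ipv6 -o jsonpath='{.spec.clusterIP}')

# IPv6 literals must be wrapped in brackets for wget
kubectl exec pod1 -it -- wget --spider "[$SVC_IP]"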
- Create a Service with PreferDualStack policy:
kubectl apply -f - <<-EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-prefer-ds
  labels:
    name: nginx-prefer-ds
spec:
  ipFamilyPolicy: PreferDualStack
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
EOF
- View basic Service information:
kubectl get svc -l name=nginx-prefer-ds
Note
This only shows the primary IP address:
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx-prefer-ds   ClusterIP   10.97.189.159   <none>        80/TCP    8s
- View complete Service details including both IPv4 and IPv6 addresses:
kubectl describe svc -l name=nginx-prefer-ds
Example output:
Name:              nginx-prefer-ds
Namespace:         default
Labels:            name=nginx-prefer-ds
Annotations:       <none>
Selector:          app=nginx
Type:              ClusterIP
IP Family Policy:  PreferDualStack
IP Families:       IPv4,IPv6
IP:                10.97.189.159
IPs:               10.97.189.159,fd00:db8:10:96::6a58
Port:              80/TCP
TargetPort:        80/TCP
Endpoints:         192.168.36.23:80,192.168.36.24:80,192.168.74.141:80
Session Affinity:  None
Events:            <none>
Key points:
- The Service will receive both IPv4 and IPv6 addresses
- The primary ClusterIP is selected based on the first address family in ipFamilies
- You can verify Service accessibility using either IP address
Example output:
root@k8s-cp:~# kubectl exec pod1 -it -- wget --spider 10.97.189.159
Connecting to 10.97.189.159 (10.97.189.159:80)
remote file exists

root@k8s-cp:~# kubectl exec pod1 -it -- wget --spider [fd00:db8:10:96::6a58]
Connecting to [fd00:db8:10:96::6a58] ([fd00:db8:10:96::6a58]:80)
remote file exists
- Create a Service that prefers IPv6:
kubectl apply -f - <<-EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-prefer-ds-ipv6
  labels:
    name: nginx-prefer-ds-ipv6
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv6
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
EOF
- View the Service status:
kubectl get svc -l name=nginx-prefer-ds-ipv6
Example output:
NAME                   TYPE        CLUSTER-IP             EXTERNAL-IP   PORT(S)   AGE
nginx-prefer-ds-ipv6   ClusterIP   fd00:db8:10:96::95b9   <none>        80/TCP    5s
- View complete Service details including both IPv4 and IPv6 addresses:
kubectl describe svc -l name=nginx-prefer-ds-ipv6
Example output:
Name:              nginx-prefer-ds-ipv6
Namespace:         default
Labels:            name=nginx-prefer-ds-ipv6
Annotations:       <none>
Selector:          app=nginx
Type:              ClusterIP
IP Family Policy:  PreferDualStack
IP Families:       IPv6,IPv4
IP:                fd00:db8:10:96::95b9
IPs:               fd00:db8:10:96::95b9,10.108.111.6
Port:              80/TCP
TargetPort:        80/TCP
Endpoints:         [fd00:db8:0:192:3770:93dc:d249:4c]:80,[fd00:db8:0:192:7a8c:db8f:dd22:9916]:80,[fd00:db8:0:192:7a8c:db8f:dd22:9917]:80
Session Affinity:  None
Key points:

- The Service receives both IPv6 and IPv4 cluster IPs; the primary CLUSTER-IP is IPv6 because IPv6 is listed first in ipFamilies
- If you later expose this Service externally as type LoadBalancer, your cloud provider must also support IPv6 load balancers
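You can repeat the same reachability test against both addresses of this Service (a sketch using the example IPs from the output above; substitute the addresses from your own cluster):

# List both cluster IPs, in ipFamilies order (IPv6 first for this Service)
kubectl get svc nginx-prefer-ds-ipv6 -o jsonpath='{.spec.clusterIPs}{"\n"}'

# Probe each address from the test Pod; the IPv6 literal needs brackets
kubectl exec pod1 -it -- wget --spider "[fd00:db8:10:96::95b9]"
kubectl exec pod1 -it -- wget --spider 10.108.111.6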
That’s all, folks! I hope this helps someone 😁.