How to: Set Up a DualStack Kubernetes Cluster

Modern Kubernetes clusters can operate with both IPv4 and IPv6 networking simultaneously, a feature known as DualStack networking. This dual-protocol approach offers several advantages:

  1. It allows pods and services to communicate using either IPv4 or IPv6 addresses.

  2. It helps organizations transition gradually from IPv4 to IPv6.

  3. It enables native communication with both legacy IPv4 systems and modern IPv6 infrastructure.

DualStack networking has been enabled by default since Kubernetes 1.21 (and became generally available in 1.23), making it easier to deploy applications that can handle both IPv4 and IPv6 traffic. This guide will walk you through setting up a DualStack-enabled Kubernetes cluster from scratch.

Key DualStack Features

A DualStack-enabled Kubernetes cluster provides:

  1. Dual IP Pod Networking: Each Pod receives both an IPv4 and IPv6 address, enabling communication over either protocol.

  2. Protocol-Flexible Services: Kubernetes Services can listen on both IPv4 and IPv6 addresses.

  3. Dual-Protocol External Access: Pods can reach external resources (including the internet) using either IPv4 or IPv6.

System Requirements

Before implementing DualStack, ensure you have:

  1. Kubernetes Version: 1.20 or newer (later versions recommended for improved stability)

  2. Infrastructure Support: Your infrastructure provider (cloud or on-premises) must support dual-protocol networking and be able to assign both IPv4 and IPv6 addresses to nodes

  3. Compatible CNI Plugin: Your Container Network Interface (CNI) plugin must explicitly support DualStack operations. Common options include:

  • Calico
  • Cilium
  • Weave Net
  • Flannel (with specific configurations)
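
On kubeadm-based clusters, DualStack also requires that a Pod CIDR and a Service CIDR be supplied for each address family when the cluster is initialized. For reference, a minimal kubeadm ClusterConfiguration sketch is shown below; the CIDR values are illustrative assumptions, not taken from this guide's custom-data scripts:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  # one CIDR per address family, comma-separated
  podSubnet: 192.168.0.0/16,fd00:db8:0:192::/56
  serviceSubnet: 10.96.0.0/12,fd00:db8:10:96::/112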

Table of Contents

  1. Create Azure Virtual Machines for the Kubernetes Cluster

  2. Create the Kubernetes Cluster

  3. Install Calico

  4. IPv6 configuration validation

    • Validate node addressing
    • Validate Pod IP Addresses
      1. Using kubectl
      2. Using the Downward API
      3. Using /etc/hosts
    • Validating Services
      • Single-stack service
      • Single-stack IPv6 service
      • Preferred Dual-Stack Service
      • Dual-Stack Service with IPv6 Preference

Instructions

1. Create Azure Virtual Machines for the Kubernetes Cluster

  1. Let’s start by defining some variables:

    export RG=rmartins-dualstack
    export LOCATION=westus2
  2. Create an Azure Resource Group for all resources.

    az group create --name $RG --location $LOCATION
  3. Create an ssh-key to access the VMs in the resource group.

    az sshkey create --name ssh-key --resource-group $RG

    Note the key pair name in the output:

    Example output:

    No public key is provided. A key pair is being generated for you.
    Private key is saved to "/Users/regis/.ssh/1735836400_1739929".
    Public key is saved to "/Users/regis/.ssh/1735836400_1739929.pub".

    Change the permissions for the generated key:

    Example command:

    chmod 400 ~/.ssh/1735836400_1739929
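
    For convenience, you can keep the generated key path in a shell variable and reuse it in the ssh commands later in this guide (the path below comes from the example output above; yours will differ):

    export SSH_KEY=~/.ssh/1735836400_1739929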
  4. Create the network resources for the VMs.

    # create a network security group for the kubernetes vms
    az network nsg create --resource-group $RG --name k8s-nsg
    # create nsg rules to allow ssh and nodeport traffic
    az network nsg rule create \
      --resource-group $RG \
      --nsg-name k8s-nsg \
      --name AllowSSH \
      --priority 1000 \
      --access Allow \
      --protocol Tcp \
      --direction Inbound \
      --source-address-prefixes '*' \
      --source-port-ranges '*' \
      --destination-address-prefixes '*' \
      --destination-port-ranges 22   
    az network nsg rule create \
      --resource-group $RG \
      --nsg-name k8s-nsg \
      --name AllowCustomPorts \
      --priority 1100 \
      --access Allow \
      --protocol Tcp \
      --direction Inbound \
      --source-address-prefixes '*' \
      --source-port-ranges '*' \
      --destination-address-prefixes '*' \
      --destination-port-ranges 30000-32767
    # create a vnet and subnets for the kubernetes cluster with IPv4 and IPv6 addresses
    az network vnet create \
      --resource-group $RG \
      --location $LOCATION \
      --name k8s-vnet \
      --address-prefixes 10.0.0.0/16 fd00:db8:0::/48 \
      --subnet-name k8s-subnet \
      --subnet-prefixes 10.0.0.0/24 fd00:db8:0:0::/64 \
      --network-security-group k8s-nsg
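
    To confirm that the VNet came up with both address families, you can query its address space (an optional sanity check, not part of the original steps):

    az network vnet show \
      --resource-group $RG \
      --name k8s-vnet \
      --query "addressSpace.addressPrefixes"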
  5. Create public IP addresses for each node (IPv4 and IPv6).

    # k8s-cp
    az network public-ip create \
      --resource-group $RG \
      --name k8s-cp-PublicIP-Ipv4 \
      --sku Standard \
      --version IPv4 \
      --zone 1 2 3
    az network public-ip create \
      --resource-group $RG \
      --name k8s-cp-PublicIP-Ipv6 \
      --sku Standard \
      --version IPv6 \
      --zone 1 2 3
    # k8s-wk1
    az network public-ip create \
      --resource-group $RG \
      --name k8s-wk1-PublicIP-Ipv4 \
      --sku Standard \
      --version IPv4 \
      --zone 1 2 3
    az network public-ip create \
      --resource-group $RG \
      --name k8s-wk1-PublicIP-Ipv6 \
      --sku Standard \
      --version IPv6 \
      --zone 1 2 3   
    # k8s-wk2
    az network public-ip create \
      --resource-group $RG \
      --name k8s-wk2-PublicIP-Ipv4 \
      --sku Standard \
      --version IPv4 \
      --zone 1 2 3
    az network public-ip create \
      --resource-group $RG \
      --name k8s-wk2-PublicIP-Ipv6 \
      --sku Standard \
      --version IPv6 \
      --zone 1 2 3   
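
    Before moving on, you can list the six public IPs just created and their versions (an optional check; the --query expression is plain JMESPath):

    az network public-ip list \
      --resource-group $RG \
      --query "[].{name:name, version:publicIpAddressVersion, ip:ipAddress}" \
      --output table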
  6. Create a network interface for each node, attaching the public IP addresses created.

    # k8s-cp
    az network nic create \
      --resource-group $RG \
      --name k8s-cp-NIC \
      --vnet-name k8s-vnet \
      --subnet k8s-subnet \
      --network-security-group k8s-nsg \
      --ip-forwarding true \
      --public-ip-address k8s-cp-PublicIP-Ipv4
    az network nic ip-config create \
      --resource-group $RG \
      --name k8s-cp-IPv6config \
      --nic-name k8s-cp-NIC \
      --private-ip-address-version IPv6 \
      --vnet-name k8s-vnet \
      --subnet k8s-subnet \
      --public-ip-address k8s-cp-PublicIP-Ipv6
    # k8s-wk1
    az network nic create \
      --resource-group $RG \
      --name k8s-wk1-NIC \
      --vnet-name k8s-vnet \
      --subnet k8s-subnet \
      --network-security-group k8s-nsg \
      --ip-forwarding true \
      --public-ip-address k8s-wk1-PublicIP-Ipv4
    az network nic ip-config create \
      --resource-group $RG \
      --name k8s-wk1-IPv6config \
      --nic-name k8s-wk1-NIC \
      --private-ip-address-version IPv6 \
      --vnet-name k8s-vnet \
      --subnet k8s-subnet \
      --public-ip-address k8s-wk1-PublicIP-Ipv6
    # k8s-wk2
    az network nic create \
      --resource-group $RG \
      --name k8s-wk2-NIC \
      --vnet-name k8s-vnet \
      --subnet k8s-subnet \
      --network-security-group k8s-nsg \
      --ip-forwarding true \
      --public-ip-address k8s-wk2-PublicIP-Ipv4
    az network nic ip-config create \
      --resource-group $RG \
      --name k8s-wk2-IPv6config \
      --nic-name k8s-wk2-NIC \
      --private-ip-address-version IPv6 \
      --vnet-name k8s-vnet \
      --subnet k8s-subnet \
      --public-ip-address k8s-wk2-PublicIP-Ipv6
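
    Each NIC should now carry two IP configurations, one per address family. You can verify this per NIC (an optional check, shown here for the control-plane NIC):

    az network nic ip-config list \
      --resource-group $RG \
      --nic-name k8s-cp-NIC \
      --output table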
  7. Finally, create the virtual machines for each node.

    Use the custom-data script for each node type: control plane (cp.sh) and worker (wk.sh).

    One requirement for setting up the virtual machines correctly for a dual-stack Kubernetes installation is enabling both IPv4 and IPv6 packet forwarding:

    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.ipv4.conf.all.forwarding = 1
    net.ipv6.conf.all.forwarding = 1
    EOF
    # Apply sysctl params without reboot
    sudo sysctl --system

    It is covered in the custom-data scripts.
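
    If you want to confirm the setting on a running VM later, read the values back; both should print 1:

    sysctl net.ipv4.conf.all.forwarding net.ipv6.conf.all.forwarding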

    From the folder where you saved the custom-data scripts (wk.sh and cp.sh), run the following commands to create the VMs.

    # k8s-cp
    az vm create \
      --resource-group $RG \
      --name k8s-cp \
      --nics k8s-cp-NIC \
      --image Ubuntu2404 \
      --size Standard_B2ms \
      --admin-username azureuser \
      --ssh-key-name ssh-key \
      --os-disk-size-gb 30 \
      --storage-sku Premium_LRS \
      --custom-data ./cp.sh
    # k8s-wk1
    az vm create \
      --resource-group $RG \
      --name k8s-wk1 \
      --nics k8s-wk1-NIC \
      --image Ubuntu2404 \
      --size Standard_B2ms \
      --admin-username azureuser \
      --ssh-key-name ssh-key \
      --os-disk-size-gb 30 \
      --storage-sku Premium_LRS \
      --custom-data ./wk.sh
    # k8s-wk2
    az vm create \
      --resource-group $RG \
      --name k8s-wk2 \
      --nics k8s-wk2-NIC \
      --image Ubuntu2404 \
      --size Standard_B2ms \
      --admin-username azureuser \
      --ssh-key-name ssh-key \
      --os-disk-size-gb 30 \
      --storage-sku Premium_LRS \
      --custom-data ./wk.sh

2. Create the Kubernetes Cluster

After creating the virtual machines, save the public IP addresses; you will use them to log into the VMs and finish the Kubernetes cluster configuration.

Example output:

{
  "fqdns": "",
  "id": "/subscriptions/03cfb895-358d-4ad4-8aba-aeede8dbfc30/resourceGroups/rmartins-dualstack/providers/Microsoft.Compute/virtualMachines/k8s-cp",
  "location": "westus2",
  "macAddress": "00-22-48-BF-82-DB",
  "powerState": "VM running",
  "privateIpAddress": "10.0.0.4,fd00:db8::4",
  "publicIpAddress": "4.149.139.34,2603:1030:c04:e::b",
  "resourceGroup": "rmartins-dualstack",
  "zones": ""
}
{
  "fqdns": "",
  "id": "/subscriptions/03cfb895-358d-4ad4-8aba-aeede8dbfc30/resourceGroups/rmartins-dualstack/providers/Microsoft.Compute/virtualMachines/k8s-wk1",
  "location": "westus2",
  "macAddress": "00-22-48-C0-CD-C4",
  "powerState": "VM running",
  "privateIpAddress": "10.0.0.5,fd00:db8::5",
  "publicIpAddress": "4.149.139.68,2603:1030:c04:e::e",
  "resourceGroup": "rmartins-dualstack",
  "zones": ""
}
{
  "fqdns": "",
  "id": "/subscriptions/03cfb895-358d-4ad4-8aba-aeede8dbfc30/resourceGroups/rmartins-dualstack/providers/Microsoft.Compute/virtualMachines/k8s-wk2",
  "location": "westus2",
  "macAddress": "00-0D-3A-FC-77-CD",
  "powerState": "VM running",
  "privateIpAddress": "10.0.0.6,fd00:db8::6",
  "publicIpAddress": "4.149.139.82,2603:1030:c04:e::12",
  "resourceGroup": "rmartins-dualstack",
  "zones": ""
}
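
If you did not save this output, the same addresses can be retrieved at any time with an optional helper command:

az vm list-ip-addresses --resource-group $RG --output table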
  1. Log into the control plane first, using the public IP address and the key previously generated.

    ssh -i ~/.ssh/1735836400_1739929 azureuser@4.149.139.34

    Escalate your privilege to root user.

    sudo su - root

    Retrieve the kubeadm join command from /var/log/cloud-init-output.log:

    grep -e 'kubeadm\|discovery' /var/log/cloud-init-output.log

    Copy and save the line containing the token:

    kubeadm join 10.0.0.4:6443 --token qwc4jt.h8u7m63ulcxkwnel --discovery-token-ca-cert-hash sha256:e33d7f81aed91ef221cac88b0b864d3c87f2e6cc0d521ac18cdc6096902c0941
  2. Log into each one of the worker nodes and run the kubeadm command to join them to the control-plane, creating the cluster.

    ssh -i ~/.ssh/1735836400_1739929 azureuser@4.149.139.68
    
    # escalate privilege to root user
    sudo su - root
    
    # execute the kubeadm command
    kubeadm join 10.0.0.4:6443 --token qwc4jt.h8u7m63ulcxkwnel --discovery-token-ca-cert-hash sha256:e33d7f81aed91ef221cac88b0b864d3c87f2e6cc0d521ac18cdc6096902c0941
  3. Include the node-ip parameter in the kubeadm-flags.env file

    # update the --node-ip in the kubeadm-flags.env
    # Backup the original file
    cp /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/kubeadm-flags.env.backup
    
    # Update the file with the new node-ip argument
    read -r IPv6 IPv4 <<< $(hostname -i) && \
    sed -i "s|KUBELET_KUBEADM_ARGS=\"\(.*\)\"|KUBELET_KUBEADM_ARGS=\"\1 --node-ip=$IPv4,$IPv6\"|" /var/lib/kubelet/kubeadm-flags.env
    
    # Restart kubelet to apply changes
    systemctl restart kubelet
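
    To confirm that the argument was added, you can inspect the file again (a quick sanity check):

    grep node-ip /var/lib/kubelet/kubeadm-flags.env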
    
  4. Go back to the control-plane shell and check the nodes of the cluster.

    Example output:

    kubectl get nodes
    NAME      STATUS     ROLES           AGE   VERSION
    k8s-cp    NotReady   control-plane   55m   v1.30.7
    k8s-wk1   NotReady   <none>          28m   v1.30.7
    k8s-wk2   NotReady   <none>          26m   v1.30.7
    

    The nodes are in a NotReady status because there is no CNI installed. Next, let's install Calico, finish the cluster configuration, and test it.

3. Install Calico

  1. Install the Tigera operator and custom resource definitions.

    kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/tigera-operator.yaml
  2. Download and configure the Tigera custom resources.

    curl https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/custom-resources.yaml -O
  3. Configure the ipPools in the custom resources.

    apiVersion: operator.tigera.io/v1
    kind: Installation
    metadata:
      name: default
    spec:
       # Configures Calico networking.
      calicoNetwork:
         # Note: The ipPools section cannot be modified post-install.
        ipPools:
        - blockSize: 26
          cidr: 192.168.0.0/16
          encapsulation: VXLAN
          natOutgoing: Enabled
          disableBGPExport: true
          nodeSelector: all()
        - blockSize: 122
          cidr: fd00:db8:0:192::/64
          encapsulation: None
          natOutgoing: Enabled
          disableBGPExport: true
          nodeSelector: all()
  4. Apply the custom resource and monitor the Calico processes.

    kubectl apply -f custom-resources.yaml
    
    # monitor the progress
    watch kubectl get tigerastatus

    Example output:

    Every 2.0s: kubectl get tigerastatus                                        k8s-cp: Thu Jan 23 16:59:39 2025
    
    NAME        AVAILABLE   PROGRESSING   DEGRADED   SINCE
    apiserver   True        False         False      83s
    calico      True        False         False      73s
    ippools     True        False         False      2m13s
    
  5. Verify that the nodes are all in Ready status.

    kubectl get nodes

4. IPv6 configuration validation

Here is how to validate a dual-stack enabled Kubernetes Cluster.

Validate node addressing

To verify the network configuration of a dual-stack Node:

  1. Each Node should be allocated exactly:

    • One IPv4 address block
    • One IPv6 address block
  2. To check these Pod address ranges, run this command, replacing the Node name with one from your cluster:

    kubectl get nodes k8s-cp -o go-template --template='{{range .spec.podCIDRs}}{{printf "%s\n" .}}{{end}}'

    Example output:

    192.168.0.0/24
    fd00:db8:0:192::/64
    

Note

The example uses k8s-cp as the Node name - replace this with an actual Node name from your cluster.

There should be one IPv4 block and one IPv6 block allocated.

To verify that a Node has both IPv4 and IPv6 interfaces:

  1. Run this command, replacing the Node name with one from your cluster:

    kubectl get nodes k8s-cp -o go-template --template='{{range .status.addresses}}{{printf "%s: %s\n" .type .address}}{{end}}'

  2. The output should show:

    • A hostname
    • An IPv4 address (starts with numbers, contains dots)
    • An IPv6 address (contains hexadecimal values and colons)

    Example output:

    InternalIP: 10.0.0.4
    InternalIP: fd00:db8::4
    Hostname: k8s-cp
    

Note

The example uses k8s-cp as the Node name - replace this with an actual Node name from your cluster.

Validate Pod IP Addresses

First, let’s create a Pod for testing:

kubectl apply -f - <<-EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: pod1
  name: pod1
spec:
  containers:
  - args:
    - sleep
    - "3600"
    image: busybox
    name: pod1
    env:
    - name: MY_POD_IPS
      valueFrom:
        fieldRef:
          fieldPath: status.podIPs
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
EOF

There are three ways to verify that a Pod has both IPv4 and IPv6 addresses assigned:

1. Using kubectl

Run this command, replacing pod1 with your Pod's name:

kubectl get pods pod1 -o go-template --template='{{range .status.podIPs}}{{printf "%s\n" .ip}}{{end}}'

Example output:

192.168.36.22
fd00:db8:0:192:7a8c:db8f:dd22:9915

2. Using the Downward API

You can expose Pod IPs as an environment variable within containers:

  1. Add this to your Pod specification:

    env:
    - name: MY_POD_IPS
      valueFrom:
        fieldRef:
          fieldPath: status.podIPs
  2. View the environment variable inside the Pod:

    kubectl exec -it pod1 -- env | grep MY_POD_IPS

    Example output:

    MY_POD_IPS=192.168.36.22,fd00:db8:0:192:7a8c:db8f:dd22:9915
    

3. Using /etc/hosts

The Pod's IP addresses are automatically written to /etc/hosts within the container. You can view them with:

kubectl exec -it pod1 -- cat /etc/hosts

Example output:

# Kubernetes-managed hosts file.
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
fe00::0	ip6-mcastprefix
fe00::1	ip6-allnodes
fe00::2	ip6-allrouters
192.168.36.22	pod1
fd00:db8:0:192:7a8c:db8f:dd22:9915	pod1

Validating Services

First, let’s create a Deployment to be used as the endpoint for the Services.

kubectl apply -f - <<-EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
EOF

Single-stack service

  1. Create a basic Service without specifying ipFamilyPolicy:

    kubectl apply -f - <<-EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-single-stack
      labels:
        name: nginx-single-stack
    spec:
      selector:
        app: nginx
      ports:
        - protocol: TCP
          port: 80
    EOF
  2. View the Service configuration:

    kubectl get service nginx-single-stack -o yaml

Key points:

  • When ipFamilyPolicy is not specified, Kubernetes will:
    • Assign a cluster IP from the first configured service-cluster-ip-range
    • Automatically set ipFamilyPolicy to SingleStack

Note

This will create a Service that operates in single-stack mode, even in a dual-stack cluster.
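
You can confirm both the assigned policy and the single allocated address directly from the Service spec; this jsonpath query is just an optional alternative to reading the full YAML:

kubectl get svc nginx-single-stack -o jsonpath='{.spec.ipFamilyPolicy}{" "}{.spec.clusterIPs}{"\n"}'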

Single-stack IPv6 service

  1. Create a Service that specifically requests IPv6:

    kubectl apply -f - <<-EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-single-stack-ipv6
      labels:
        name: nginx-single-stack-ipv6
    spec:
      ipFamilies:
      - IPv6
      selector:
        app: nginx
      ports:
        - protocol: TCP
          port: 80
    EOF
  2. View the Service configuration:

    kubectl get svc nginx-single-stack-ipv6 -o yaml

Expected configuration:

  • spec.ipFamilyPolicy will be set to SingleStack
  • spec.clusterIP will be an IPv6 address from the range specified in the kube-controller-manager's --service-cluster-ip-range flag

Note

This creates a Service that exclusively uses IPv6, even in a dual-stack cluster.
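
You can also exercise the Service from the test Pod created earlier (this assumes pod1 is still running; the Service name resolves through cluster DNS, which serves only an AAAA record for this single-stack IPv6 Service):

kubectl exec -it pod1 -- wget --spider http://nginx-single-stack-ipv6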

Preferred Dual-Stack Service

  1. Create a Service with PreferDualStack policy:

    kubectl apply -f - <<-EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-prefer-ds
      labels:
        name: nginx-prefer-ds
    spec:
      ipFamilyPolicy: PreferDualStack
      selector:
        app: nginx
      ports:
        - protocol: TCP
          port: 80
    EOF
  2. View basic Service information:

    kubectl get svc -l name=nginx-prefer-ds

Note

This only shows the primary IP address:

NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx-prefer-ds   ClusterIP   10.97.189.159   <none>        80/TCP    8s
  3. View complete Service details including both IPv4 and IPv6 addresses:

    kubectl describe svc -l name=nginx-prefer-ds

    Example output:

    Name:              nginx-prefer-ds
    Namespace:         default
    Labels:            name=nginx-prefer-ds
    Annotations:       <none>
    Selector:          app=nginx
    Type:              ClusterIP
    IP Family Policy:  PreferDualStack
    IP Families:       IPv4,IPv6
    IP:                10.97.189.159
    IPs:               10.97.189.159,fd00:db8:10:96::6a58
    Port:              <unset>  80/TCP
    TargetPort:        80/TCP
    Endpoints:         192.168.36.23:80,192.168.36.24:80,192.168.74.141:80
    Session Affinity:  None
    Events:            <none>
    

Key points:

  • The Service will receive both IPv4 and IPv6 addresses
  • The primary ClusterIP is selected based on the first address family in ipFamilies
  • You can verify Service accessibility using either IP address

Example output:

root@k8s-cp:~# kubectl exec pod1 -it -- wget --spider 10.97.189.159
Connecting to 10.97.189.159 (10.97.189.159:80)
remote file exists
root@k8s-cp:~# kubectl exec pod1 -it -- wget --spider [fd00:db8:10:96::6a58]
Connecting to [fd00:db8:10:96::6a58] ([fd00:db8:10:96::6a58]:80)
remote file exists

Dual-Stack Service with IPv6 Preference

  1. Create a Service that prefers IPv6:

    kubectl apply -f - <<-EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-prefer-ds-ipv6
      labels:
        name: nginx-prefer-ds-ipv6
    spec:
      ipFamilyPolicy: PreferDualStack
      ipFamilies:
      - IPv6
      selector:
        app: nginx
      ports:
        - protocol: TCP
          port: 80
    EOF
  2. View the Service status:

    kubectl get svc -l name=nginx-prefer-ds-ipv6

    Example output:

    NAME                   TYPE        CLUSTER-IP             EXTERNAL-IP   PORT(S)   AGE
    nginx-prefer-ds-ipv6   ClusterIP   fd00:db8:10:96::95b9   <none>        80/TCP    5s
    
  3. View complete Service details including both IPv4 and IPv6 addresses:

    kubectl describe svc -l name=nginx-prefer-ds-ipv6

    Example output:

    Name:              nginx-prefer-ds-ipv6
    Namespace:         default
    Labels:            name=nginx-prefer-ds-ipv6
    Annotations:       <none>
    Selector:          app=nginx
    Type:              ClusterIP
    IP Family Policy:  PreferDualStack
    IP Families:       IPv6,IPv4
    IP:                fd00:db8:10:96::95b9
    IPs:               fd00:db8:10:96::95b9,10.108.111.6
    Port:              <unset>  80/TCP
    TargetPort:        80/TCP
    Endpoints:         [fd00:db8:0:192:3770:93dc:d249:4c]:80,[fd00:db8:0:192:7a8c:db8f:dd22:9916]:80,[fd00:db8:0:192:7a8c:db8f:dd22:9917]:80
    Session Affinity:  None
    

Key points:

  • The Service receives both address families; the primary cluster IP (shown as CLUSTER-IP) comes from the first family listed in ipFamilies - IPv6 in this case
  • If you expose a Service like this externally with type LoadBalancer, your cloud provider must support IPv6 load balancers
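
Cluster DNS publishes one record per address family for a dual-stack Service, so you can also confirm the allocation by name (this assumes pod1 from the Pod validation section is still running; recent busybox images resolve both A and AAAA records):

kubectl exec -it pod1 -- nslookup nginx-prefer-ds-ipv6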

That’s all, folks! I hope this helps someone 😁.
