Download the draw.io file of this schema
This repo contains a preconfigured Azure Kubernetes Service cluster embedded in a hub-and-spoke network topology, aligned with the Azure enterprise-scale landing zone reference architecture. It is useful for testing and studying network configurations in a controlled, repeatable environment.
As a bonus, many scenarios with step-by-step solutions for studying and learning are also available.
The "playground" is composed by:
- a hub and spoke network topologies aligned with the Microsoft Enterprise scale landing zone reference architecture
- an AKS cluster deployed in one spoke
- route tables and a firewall policy configured so that all AKS outbound traffic is routed through the firewall (a sketch of such a route follows this list)
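For illustration only, this is roughly what such a default route looks like when created with the Azure CLI. The resource group name and the firewall private IP below are assumptions; `all-to-firewall-we` is one of the route tables deployed by the template (described further down):

```bash
# Illustration: a default route (0.0.0.0/0) whose next hop is the firewall's
# private IP forces all outbound traffic through Azure Firewall.
# Resource group name and firewall IP are assumptions; adjust to your deployment.
az network route-table route create \
  --resource-group rg-aks-playground \
  --route-table-name all-to-firewall-we \
  --name default-to-firewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.12.1.4
```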
You can use the following button to deploy the demo to your Azure subscription:
| # | Description | Deploy |
|---|---|---|
| 1 | The AKS playground deploys `hub-lab-net`, spokes 01, 02, and 03, an AKS cluster, and all the required routing | |
This diagram shows a detailed version that also includes all subnets, virtual machines, NVAs, IPs, and firewalls.
Download the draw.io file of this schema.
The ARM template `hub-spoke-aks.json` deploys:

- 4 Azure Virtual Networks:
  - `hub-lab-net` with 4 subnets:
    - an empty `default` subnet
    - `AzureFirewallSubnet`: a subnet used by Azure Firewall
    - `AzureBastionSubnet`: a subnet used by Azure Bastion
    - `GatewaySubnet`: a subnet ready to host an Azure Virtual Network Gateway
  - an empty `spoke-01` with 2 subnets: `default` and `services`
  - `spoke-02` with 2 subnets: `default` and `services`
  - `spoke-03` with 2 subnets: `default` and `services`
- `lab-firewall`: an Azure Firewall Premium on the `hub-lab-net` network
- `my-firewall-policy`: a sample policy that implements any-to-any routing between the spokes and allows all outbound internet traffic
- `all-to-firewall-we` and `all-to-firewall-ne`: route tables that forward all outbound traffic through the central Azure Firewall
- `hub-playground-ws`: a Log Analytics workspace where all firewall logs are collected
- `aks-01`: an Azure Kubernetes Service cluster deployed on the `services` subnet of `spoke-01`
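As a quick sketch, you could also deploy the template from the Azure CLI instead of the portal button; the resource group name and region below are assumptions:

```bash
# Create a resource group for the playground (name and region are assumptions).
az group create --name rg-aks-playground --location westeurope

# Deploy the hub-spoke-aks.json ARM template into the resource group.
az deployment group create \
  --resource-group rg-aks-playground \
  --template-file hub-spoke-aks.json
```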
The `aks-01` cluster has 1 node pool and 2 sample workloads deployed, `azure-vote-front` and `azure-vote-back`, taken from the Microsoft Artifact Registry: a super-simple front-end/back-end application that exposes a sample UI over HTTP.
To test the workload, you need to know the IP of the front-end load balancer.
_Because we're using Azure CNI, you could also use the pod IP. Keep in mind, however, that pods are volatile, so in a Kubernetes context it is always advisable to use the load balancer IP._
You can find this IP in:
- Azure Portal > `aks-01` > Services and ingresses > Services > `azure-vote-front` > External IP (something like `10.13.1.y`)
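If you prefer the command line, a minimal sketch with the Azure CLI and `kubectl` (the resource group name is an assumption; `aks-01` and `azure-vote-front` are the names used by the template):

```bash
# Fetch credentials for the cluster (resource group name is an assumption).
az aks get-credentials --resource-group rg-aks-playground --name aks-01

# The EXTERNAL-IP column of the front-end service is the load balancer IP.
kubectl get service azure-vote-front
```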
To test it, connect to `hub-vm-01` via RDP/Bastion and open `http://x.x.x.x` in Edge (where `x.x.x.x` is the IP found above).
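From a shell on `hub-vm-01` you can also do a quick check with `curl` instead of Edge (replace `x.x.x.x` with the IP found above):

```bash
# A 200 OK response indicates the front end is reachable through the load balancer.
curl -I http://x.x.x.x
```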
Here is a list of tested scenarios that you can use on this playground.
For each scenario you have:
- prerequisites: the components that must be deployed to implement the solution (only the hub, also an on-prem playground, or both)
- solution: a step-by-step sequence to implement the solution
- test solution: a procedure to follow to verify that the scenario works as expected
| # | Scenario description | Solution |
|---|---|---|
| 1 | Deploy a confidential computing node pool | see the documentation |
| 2 | Expose a workload from AKS with Azure Front Door | see the documentation |
| 3 | Expose a workload from AKS with Azure Firewall | see the documentation |
| 4 | Deploy Container Insights on AKS cluster | see the documentation |