April 8, 2019

Managed Azure Kubernetes Services (AKS) – Network Design with Azure CNI Plugin

Ann Carpenter

Note: Special thanks to Navisite leaders John Rudenauer, Balaji Sundara and Mike Gallo for their continued support on this blog series.

Managed Kubernetes, part of Navisite’s Azure Cloud Management Services, simplifies the deployment, management and operation of Kubernetes. It lets developers take advantage of Kubernetes without worrying about the underlying plumbing required to get it up and running, freeing their time to focus on applications. The major cloud providers each offer such a service: Google has Google Kubernetes Engine (GKE), Amazon has Elastic Container Service for Kubernetes (EKS), and Microsoft has Azure Kubernetes Service (AKS).
This is Part 2 of the AKS network design series. In Part 1 we looked at an AKS cluster deployed with Kubenet networking, the default configuration for AKS clusters. With Kubenet, nodes get IPs from the Azure Virtual Network while pods receive IPs from a different address space. With Azure CNI, every pod gets an IP address in the Azure Virtual Network subnet and can be reached directly within the virtual network. This blog walks you through a step-by-step process to create a public-facing “LoadBalancer” service type in AKS using the Azure CNI plugin. Once the sample application is deployed, we will do a deep dive into the networking and traffic flow.
More blogs (#4 and #5) are coming up in this series, so stay tuned…

  1. Azure Kubernetes Services (AKS) – Kubenet Network Design (Part 1)
  2. Azure Container Registry
  3. Managed Azure Kubernetes Services (AKS) – Advanced Network Design with CNI (Part 2)
  4. Custom Kubernetes Cluster on IaaS VMs in Azure using Flannel Overlay
  5. AKS with Persistent volumes using Azure Disks and Azure Files

Reference Architecture

Azure Documentation

Azure provides great documentation, so be sure to check out the detailed documentation on networking concepts here.
From the Azure documentation:

[Diagram: Azure CNI network model – source: Azure documentation]

Create an AKS Cluster using an Existing VNET

For details check out the Azure documentation here.
With Azure Container Networking Interface (CNI), every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be unique across your network space, and must be planned in advance. Each node has a configuration parameter for the maximum number of pods that it supports. The equivalent number of IP addresses per node are then reserved up front for that node. This approach requires more planning, and often leads to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow.
Specify the --max-pods argument when you deploy a cluster with the az aks create command. The maximum value is 110.
Note: You can’t change the maximum number of pods per node when you deploy a cluster with the Azure portal. Azure CNI networking clusters are limited to 30 pods per node when you deploy using the Azure portal.
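As a quick sanity check on subnet sizing, the up-front reservation described above can be worked out in shell. The formula here (one IP for the node itself plus one per configured pod slot) is a simplification for illustration; Azure’s own planning guidance adds further headroom for node upgrades, and Azure also holds back five addresses in every subnet.

```shell
# Azure CNI reserves IPs up front: roughly one per node plus one per
# pod slot. Values match the cluster created below (3 nodes, --max-pods 10).
NODES=3
MAX_PODS=10
RESERVED=$(( NODES * (MAX_PODS + 1) ))
echo "IPs reserved up front: $RESERVED"   # 33

# A /24 subnet has 256 addresses; Azure holds back 5 per subnet.
USABLE=$(( 256 - 5 ))
echo "Usable IPs in a /24: $USABLE"       # 251
```

With the default of 30 pods per node (portal deployments), the same three nodes would already consume 93 addresses, which is why the address space must be planned before the cluster is built.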

# login to azure
az login
# Create a resource group
az group create --name nn-aks-advanced-rg --location eastus
# create a vnet (the address ranges below are example values; use
# ranges that fit your own IP plan)
az network vnet create \
    --resource-group nn-aks-advanced-rg \
    --name nn-aks-advanced-vnet \
    --address-prefixes 10.240.0.0/16 \
    --subnet-name nn-aks-advanced-subnet \
    --subnet-prefix 10.240.0.0/24
# create service principal
az ad sp create-for-rbac --skip-assignment
nehali@nn-ubuntu-vm:~$ az ad sp create-for-rbac --skip-assignment
{
  "appId": "6558f841-f5c9-49a3-b297-deba41ba3a9f",
  "displayName": "azure-cli-2019-02-15-00-37-06",
  "name": "http://azure-cli-2019-02-15-00-37-06",
  "password": "2160bb91-ad40-4b58-9069-181723453652",
  "tenant": "93e5c8a2-2648-4ac4-b364-3dc67eb6c7bd"
}
# define variables
VNET_ID=$(az network vnet show --resource-group nn-aks-advanced-rg --name nn-aks-advanced-vnet --query id -o tsv)
SUBNET_ID=$(az network vnet subnet show --resource-group nn-aks-advanced-rg --vnet-name nn-aks-advanced-vnet --name nn-aks-advanced-subnet --query id -o tsv)
nehali@nn-ubuntu-vm:~$ echo $VNET_ID
nehali@nn-ubuntu-vm:~$ echo $SUBNET_ID
# assign contributor access to the vnet
az role assignment create --assignee "6558f841-f5c9-49a3-b297-deba41ba3a9f" --scope $VNET_ID --role Contributor
# create the AKS cluster. Notice the --max-pods setting and the service
# CIDR settings (the three CIDR/IP values below are example values; they
# must not overlap the vnet address space)
az aks create \
    --resource-group nn-aks-advanced-rg \
    --name nn-aks-advanced-cluster \
    --node-count 3 \
    --network-plugin azure \
    --service-cidr 10.0.0.0/16 \
    --dns-service-ip 10.0.0.10 \
    --docker-bridge-address 172.17.0.1/16 \
    --vnet-subnet-id $SUBNET_ID \
    --service-principal "6558f841-f5c9-49a3-b297-deba41ba3a9f" \
    --client-secret "2160bb91-ad40-4b58-9069-181723453652" \
    --generate-ssh-keys \
    --node-vm-size Standard_DS1_v2 \
    --dns-name-prefix nnaksadvcluster \
    --max-pods 10
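Before the kubectl commands in the next section will work, the new cluster’s credentials need to be merged into the local kubeconfig (a step implied between the sections):

```shell
# Merge the cluster's credentials into ~/.kube/config so that
# subsequent kubectl commands target this cluster.
az aks get-credentials \
    --resource-group nn-aks-advanced-rg \
    --name nn-aks-advanced-cluster
```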

Run AKS Cluster Validations and Sample Application

Note the Node IPs
nehali@nn-ubuntu-vm:~$ kubectl get nodes -o wide
aks-nodepool1-16969019-0   Ready    agent   3d19h   v1.9.11   <none>        Ubuntu 16.04.5 LTS   4.15.0-1037-azure   docker://3.0.4
aks-nodepool1-16969019-1   Ready    agent   3d19h   v1.9.11   <none>        Ubuntu 16.04.5 LTS   4.15.0-1037-azure   docker://3.0.4
aks-nodepool1-16969019-2   Ready    agent   3d19h   v1.9.11    <none>        Ubuntu 16.04.5 LTS   4.15.0-1037-azure   docker://3.0.4
The Deployment and Service manifest files are the same as in Part 1.
nehali@nn-ubuntu-vm:~$ kubectl create -f nn-deployment.yaml
deployment.apps/nn-nginx-deployment created
nehali@nn-ubuntu-vm:~$ kubectl create -f nn-service.yaml
service/nn-nginx-service created
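The manifest contents aren’t reproduced in this post. A minimal sketch of what nn-deployment.yaml and nn-service.yaml likely contain is below, here applied inline via a heredoc; the names and the app=nn-nginx label are taken from the kubectl output, while the nginx image and port 80 are assumptions based on the load balancer test later in the post.

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nn-nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nn-nginx
  template:
    metadata:
      labels:
        app: nn-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nn-nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nn-nginx
  ports:
  - port: 80
    targetPort: 80
EOF
```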
Note the POD IPs
nehali@nn-ubuntu-vm:~$ kubectl get pods -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP            NODE                       NOMINATED NODE   READINESS GATES
nn-nginx-deployment-77fcff4b8-5fjqb   1/1     Running   0          56s   aks-nodepool1-16969019-0   <none>           <none>
nn-nginx-deployment-77fcff4b8-lqdj4   1/1     Running   0          56s   aks-nodepool1-16969019-2   <none>           <none>
nn-nginx-deployment-77fcff4b8-zfdhx   1/1     Running   0          56s   aks-nodepool1-16969019-2   <none>           <none>
nehali@nn-ubuntu-vm:~$ kubectl get service -o wide
NAME               TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE     SELECTOR
kubernetes         ClusterIP     <none>          443/TCP        3d19h   <none>
nn-nginx-service   LoadBalancer   80:30371/TCP   2m6s    app=nn-nginx

Azure Side Screen Captures

Resource Groups

VNET and Subnets

Virtual Machines/Nodes

Node Interfaces

External Load Balancer


Finally, connect to the Azure load balancer front end IP.
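Rather than reading the front-end IP from the portal, the same address can be pulled from the service object and tested from the command line, for example:

```shell
# Fetch the load balancer front-end IP assigned to the service
# (redacted in the kubectl output above) and probe it with curl.
EXTERNAL_IP=$(kubectl get service nn-nginx-service \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -I "http://$EXTERNAL_IP"
```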

Managed AKS – Easier container deployment and management

Managed AKS makes it easy to deploy and manage containerized applications without container orchestration expertise. However, making the right network design decision is critical because it affects the overall solution for remote access and for internal and external load balancing.
Note: I’d like to thank my manager John Rudenauer; Balaji Sundara from Navisite Product Management; my colleagues Umang Chhibber and Eric Corbett; Chris Pierdominici and Carole Bailey from Marketing; and Mike Gallo from Professional Services for their continued support and direction.
Learn more about how Navisite’s Azure Cloud Management Services can help you more optimally deploy containers in AKS.  If your organization needs assistance in migrating to Azure, or managing an existing deployment, please contact us for additional information, or call us at (888) 298-8222 in the US, or 0800-6122933 in the UK.

About Ann Carpenter

Ann is the head of demand generation and blogger at Navisite with more than 10 years of experience working, writing and developing content for technology companies while living in the U.S. and in countries around the world. She currently lives in Atlanta, Ga., with her husband and newborn son.