Add subnet in AKS CNI mode

Clusters configured with Azure CNI networking require additional planning. The size of your virtual network and its subnet must accommodate the number of pods you plan to run and the number of nodes for the cluster. IP addresses for the pods and the cluster's nodes are assigned from the specified subnet within the virtual network. The number of IP addresses required should include considerations for upgrade and scaling operations.

  1. When you upgrade your AKS cluster, a new node is deployed into the cluster. Services and workloads begin to run on the new node, and an older node is removed from the cluster. This rolling upgrade process requires a minimum of one additional block of IP addresses to be available. Your node count is then n + 1.
  2. When you scale an AKS cluster, a new node is deployed into the cluster. Services and workloads begin to run on the new node. Your IP address range needs to take into consideration how you may want to scale up the number of nodes and pods your cluster can support. One additional node for upgrade operations should also be included. Your node count is then n + number-of-additional-scaled-nodes-you-anticipate + 1 (see the sizing sketch after this list).
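As a rough illustration of how those rules translate into subnet size: with Azure CNI, each node consumes one IP for itself plus one IP per pod it can host. All the numbers below (anticipated node count, surge node, `--max-pods` value) are hypothetical placeholders, not values taken from this cluster:

```bash
# Hypothetical sizing sketch for an Azure CNI subnet.
nodes=5       # maximum node count you anticipate after scaling
surge=1       # the one extra node used during a rolling upgrade
max_pods=30   # AKS default --max-pods for Azure CNI node pools
# One IP per node plus max_pods IPs for that node's pods:
echo $(( (nodes + surge) * (max_pods + 1) ))   # 186 IPs -> a /24 subnet fits
```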

Optionally, you can first add a new address range to the VNet address space; note that if the VNet has a peering configured, adding a new address space is not supported.

1. Add a new subnet for the new AKS node pool.
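If you prefer the CLI over the portal for this step, the same subnet can be created with `az network vnet subnet create`. The resource group, VNet, and subnet names below match the ones used in the later steps, but the address prefix is an assumed example:

```bash
# Sketch only: 10.1.0.0/24 is a placeholder; pick an unused range
# inside your VNet address space.
az network vnet subnet create \
  --resource-group aadaks \
  --vnet-name azure-k8stest-vnet \
  --name AKSsubnet \
  --address-prefixes 10.1.0.0/24
```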

2. Check the subnet resource ID:

```bash
az network vnet subnet list --resource-group aadaks --vnet-name azure-k8stest-vnet
```
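Instead of scanning the full JSON output for the ID, a `--query` filter can extract it directly (assuming the subnet name AKSsubnet from the previous step):

```bash
az network vnet subnet show \
  --resource-group aadaks \
  --vnet-name azure-k8stest-vnet \
  --name AKSsubnet \
  --query id -o tsv
```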

3. Create a node pool in the dedicated subnet, passing the subnet resource ID (the output of the previous command) as an additional parameter:

```bash
az aks nodepool add \
  --resource-group aadaks \
  --cluster-name icyaks \
  --name mynodepool \
  --node-count 1 \
  --vnet-subnet-id /subscriptions/xxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/aadaks/providers/Microsoft.Network/virtualNetworks/azure-k8stest-vnet/subnets/AKSsubnet
```
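To double-check that the new pool is actually bound to the dedicated subnet, you can query the pool afterwards; `vnetSubnetId` is the output property name this sketch assumes:

```bash
az aks nodepool show \
  --resource-group aadaks \
  --cluster-name icyaks \
  --name mynodepool \
  --query vnetSubnetId -o tsv
```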

4. Check the new node pool information in the AKS portal and with kubectl:

```bash
kubectl get nodes -o wide
```
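You can also verify that the new nodes received addresses from the new subnet. The pattern below assumes the example 10.1.0.0/24 prefix used earlier; substitute your actual range:

```bash
# Nodes in mynodepool should report an INTERNAL-IP inside AKSsubnet.
kubectl get nodes -o wide | grep '10\.1\.'
```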

5. Drain the pods onto the new node pool. Draining first cordons the node so no new pods are scheduled onto it, then gracefully terminates all pods running there. If your DaemonSets are non-critical pods such as monitoring agents, keep --ignore-daemonsets:

```bash
# Gracefully terminate all pods while marking the node unschedulable,
# allowing up to 600 seconds per pod
kubectl drain <NODE_NAME> --ignore-daemonsets --grace-period=600
# Force eviction of any pods not managed by a controller
kubectl drain <NODE_NAME> --ignore-daemonsets --force
```
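Rather than draining node by node, a small loop can cordon and drain every node in the old pool. The pool name `nodepool1` and the `agentpool` node label are assumptions; adjust them to match your cluster:

```bash
# Sketch: drain all nodes of the old pool (assumed to be "nodepool1").
for node in $(kubectl get nodes -l agentpool=nodepool1 -o name); do
  kubectl cordon "$node"    # stop new pods from being scheduled here
  kubectl drain "$node" --ignore-daemonsets --grace-period=600
done
```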

6. Restart the VM, then uncordon the node:

```bash
az vm restart --resource-group MC_xxxxxxxxx -n xxxxxxx
kubectl uncordon <NODE_NAME>
```

7. Continue to upgrade AKS (now that all the pod IPs on the previous nodes have been released):

https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni#configure-maximum---existing-clusters
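A typical upgrade sequence is to list the versions available to the cluster and then upgrade; the target version below is only a placeholder:

```bash
az aks get-upgrades --resource-group aadaks --name icyaks -o table
# 1.24.6 is an example; pick a version from the table above.
az aks upgrade --resource-group aadaks --name icyaks --kubernetes-version 1.24.6
```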

Attention

At the VNet level, all subnets in a VNet can communicate with each other, but in AKS you may hit unexpected errors when adding a new subnet to an existing cluster's VNet. It is always recommended to recreate the cluster with a better CNI/IP address design instead.

Updated in Nov 2022.

This can be treated as an official method.
