As we deployed more Azure Arc-enabled SQL Managed Instances to meet demand, we had to scale out the worker nodes of the aks-lab-workloads-1 cluster. Azure Monitor Container Insights helped us determine the right time to scale out: based on the performance and capacity metrics it reported, we could choose to scale out the control-plane nodes, the worker nodes, or both. For this test, we scaled out only the aks-lab-workloads-1-linux node pool from two to four worker nodes, using PowerShell commands run on one of the Azure Stack HCI cluster nodes.
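In addition to the Container Insights dashboards, a quick command-line check of current node and pod utilization can help confirm that the existing workers are approaching capacity. A minimal sketch, assuming the metrics-server add-on (which kubectl top relies on) is running in the cluster:
kubectl top nodes
kubectl top pods -n arc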
To see the existing worker nodes, we ran the following command:
PS D:\Source\Lab> kubectl get nodes
NAME              STATUS   ROLES                  AGE   VERSION
moc-lcp9ks30ktb   Ready    <none>                 12d   v1.21.2
moc-ljhfmw3lpwa   Ready    <none>                 12d   v1.21.2
moc-lwdgzty53ri   Ready    control-plane,master   19d   v1.21.2
The existing aks-lab-workloads-1-linux worker nodes, moc-lcp9ks30ktb and moc-ljhfmw3lpwa, were running on AXNode02 and AXNode04. The 10 SQL MIs we had deployed for performance testing, sqlmi-01 through sqlmi-10, were evenly distributed across these two nodes.
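To see that distribution, a per-node pod listing can be used. For example, for the first of the two existing workers (the same field-selector filter we use again later):
kubectl get pods -n arc -o wide --field-selector spec.nodeName=moc-lcp9ks30ktb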
Then we ran the following PowerShell command on AXNode01 to expand the node pool:
PS D:\Source\Lab> Set-AksHciNodePool -ClusterName aks-lab-workloads-1 -Name aks-lab-workloads-1-linux -Count 4
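The scale-out can be monitored from the same session. For example, Get-AksHciNodePool (part of the same AksHci PowerShell module) reports the node pool's current node count, and kubectl can watch the new nodes register and become Ready:
Get-AksHciNodePool -ClusterName aks-lab-workloads-1 -Name aks-lab-workloads-1-linux
kubectl get nodes --watch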
We ran kubectl again to see the new nodes:
PS D:\Source\Lab> kubectl get nodes
NAME              STATUS   ROLES                  AGE   VERSION
moc-l0v18yh90cy   Ready    <none>                 30m   v1.21.2
moc-lcp9ks30ktb   Ready    <none>                 12d   v1.21.2
moc-ljdqtqgo8kl   Ready    <none>                 30m   v1.21.2
moc-ljhfmw3lpwa   Ready    <none>                 12d   v1.21.2
moc-lwdgzty53ri   Ready    control-plane,master   19d   v1.21.2
To evenly distribute the AKS-HCI load across the Azure Stack HCI cluster, the new worker nodes were automatically placed on AXNode01 and AXNode03. The scale-out procedure took only 2 minutes and 30 seconds to complete, after which all four worker nodes were online and ready to host new AKS-HCI resources. This time includes the installation of all the monitoring and management agents on the new nodes. As expected, the existing SQL MIs running on the cluster were not redistributed across all four nodes after the scale-out finished.
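One way to confirm which Azure Stack HCI host each worker-node VM landed on is to query the clustered VMs with the standard Failover Clustering cmdlets from any of the physical nodes; a minimal sketch:
Get-ClusterGroup | Where-Object GroupType -eq 'VirtualMachine' | Select-Object Name, OwnerNode, State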
Then we created four more SQL MIs (sqlmi-11 through sqlmi-14). They were automatically scheduled onto the two new worker nodes, moc-l0v18yh90cy and moc-ljdqtqgo8kl, and evenly distributed between them.
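For reference, each SQL MI is created in the arc namespace. A creation command along the lines of the following sketch, assuming the az arcdata CLI extension and the indirectly connected mode (the exact method used in the lab may differ), produces the sqlmi-NN-0 pods shown below:
az sql mi-arc create --name sqlmi-11 --k8s-namespace arc --use-k8s
The per-node pod listings confirm the placement on the two new worker nodes: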
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=moc-l0v18yh90cy
NAMESPACE   NAME         READY   STATUS    NODE
arc         sqlmi-13-0   3/3     Running   moc-l0v18yh90cy
arc         sqlmi-14-0   3/3     Running   moc-l0v18yh90cy
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=moc-ljdqtqgo8kl
NAMESPACE   NAME         READY   STATUS    NODE
arc         sqlmi-11-0   3/3     Running   moc-ljdqtqgo8kl
arc         sqlmi-12-0   3/3     Running   moc-ljdqtqgo8kl
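For a consolidated view of all 14 SQL MI pods and the worker nodes they run on, the arc namespace can also be listed with a single command:
kubectl get pods -n arc -o wide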