Scale out the AKS hybrid workload cluster
After we added a fourth node to our Azure Stack HCI cluster, we deployed a fourth worker node to the dbaas-databases-1 workload cluster. In the event of an AKS hybrid worker node VM failure, this additional capacity would be available to host re-created SQL MIs. To scale out the node pool from three to four worker nodes, we ran PowerShell commands on one of the Azure Stack HCI cluster nodes.
To see the existing worker nodes, we ran the following:
kubectl get nodes
NAME              STATUS   ROLES                  AGE   VERSION
moc-l4sfoyqcv4i   Ready    <none>                 42d   v1.24.6
moc-la653mlicxn   Ready    <none>                 42d   v1.24.6
moc-lmsv25r92xk   Ready    control-plane,master   42d   v1.24.6
moc-lwevgtrleok   Ready    <none>                 42d   v1.24.6
Then, we ran the PowerShell command on a7525r06c01n01 to expand the node pool:
Set-AksHciNodePool -ClusterName aks-lab-workloads-1 -Name aks-lab-workloads-1-linux -Count 4
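As a quick confirmation that the pool definition was updated, the AksHci module also provides the Get-AksHciNodePool cmdlet, which reports each pool's configured node count. A minimal sketch, run on the same Azure Stack HCI cluster node (the output format shown in the comment may vary by module version):

```powershell
# Sketch: verify the scaled node pool configuration after Set-AksHciNodePool.
# Assumes the AksHci PowerShell module is installed on this cluster node.
Get-AksHciNodePool -ClusterName aks-lab-workloads-1
```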
We ran kubectl get nodes again to see the new node:
kubectl get nodes
NAME              STATUS   ROLES                  AGE   VERSION
moc-l4sfoyqcv4i   Ready    <none>                 42d   v1.24.6
moc-la653mlicxn   Ready    <none>                 42d   v1.24.6
moc-lmsv25r92xk   Ready    control-plane,master   42d   v1.24.6
moc-lpnn5gld7fy   Ready    <none>                 87s   v1.24.6
moc-lwevgtrleok   Ready    <none>                 42d   v1.24.6
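Rather than re-running kubectl and counting rows by eye, a small helper can tally Ready nodes from the command's output. This is a hypothetical convenience script, with the sample here-document standing in for live `kubectl get nodes` output:

```shell
#!/bin/sh
# count_ready: count nodes whose STATUS column reads "Ready",
# skipping the header line. Reads `kubectl get nodes` output on stdin,
# e.g. `kubectl get nodes | count_ready`.
count_ready() {
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print n+0 }'
}

# Sample captured above (illustration only, not live cluster state):
count_ready <<'EOF'
NAME              STATUS   ROLES                  AGE   VERSION
moc-l4sfoyqcv4i   Ready    <none>                 42d   v1.24.6
moc-la653mlicxn   Ready    <none>                 42d   v1.24.6
moc-lmsv25r92xk   Ready    control-plane,master   42d   v1.24.6
moc-lpnn5gld7fy   Ready    <none>                 87s   v1.24.6
moc-lwevgtrleok   Ready    <none>                 42d   v1.24.6
EOF
# prints 5
```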
We assumed that the newly created moc-lpnn5gld7fy virtual machine would be automatically placed on the new physical node, a7525r06c01n04. However, it was placed on a7525r06c01n02, likely because overall resource utilization on the system was low at the time. To ensure even distribution of SQL MIs across all cluster resources, we live migrated this VM to the fourth node so that the four dbaas-databases-1 worker node VMs were evenly distributed across the Azure Stack HCI cluster.
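A live migration like this can be driven from the FailoverClusters cmdlets on any Azure Stack HCI cluster node. The following is a sketch only; the assumption that the VM's cluster group name matches its Kubernetes node name should be verified first (for example, with Get-ClusterGroup):

```powershell
# Sketch: live migrate the new worker node VM to the fourth physical node.
# The cluster group name "moc-lpnn5gld7fy" is an assumption for illustration.
Move-ClusterVirtualMachineRole -Name "moc-lpnn5gld7fy" -Node "a7525r06c01n04" -MigrationType Live
```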
The scale-out procedure took only 2 minutes and 30 seconds to complete before all four nodes were online and ready to host SQL MIs. This time included the installation of all new software agents on the nodes for monitoring and management. As expected, the existing SQL MI pods running on the cluster were not redistributed across all four nodes after the scale-out finished; the Kubernetes scheduler places only newly created pods, so running pods remain on their original nodes until they are re-created or evicted.
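Pod placement can be confirmed by tallying the NODE column of `kubectl get pods -o wide`. A hypothetical helper is sketched below; the namespace-less sample output and pod names are illustrative assumptions, not our actual deployment:

```shell
#!/bin/sh
# pods_per_node: tally pods by the NODE column (7th field) of
# `kubectl get pods -o wide` output read on stdin; sort for stable output.
pods_per_node() {
  awk 'NR > 1 { count[$7]++ } END { for (n in count) print n, count[n] }' | sort
}

# Illustrative sample (pod names and IPs are assumptions):
pods_per_node <<'EOF'
NAME         READY   STATUS    RESTARTS   AGE   IP           NODE
sql-mi-1-0   3/3     Running   0          10d   10.244.1.5   moc-l4sfoyqcv4i
sql-mi-2-0   3/3     Running   0          10d   10.244.2.7   moc-la653mlicxn
sql-mi-3-0   3/3     Running   0          10d   10.244.3.4   moc-lwevgtrleok
EOF
# prints:
# moc-l4sfoyqcv4i 1
# moc-la653mlicxn 1
# moc-lwevgtrleok 1
```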