For Test-3, as shown in the following figure, we ran another group of four SQL MIs. For this test, we increased the vCPU request by two and lowered the memory request by 2 Gi on each instance. Although the results were still strong, we did not achieve the same TPM that we did with Test-2, especially at the higher virtual user count of 20. We suspect that this shortfall was caused by fewer SQL Server pages being cached in memory as a result of the reduced memory request.
Each SQL MI that we deployed was configured with a requested core count that was less than the total number of cores on the worker node. We observed that AKS-HCI capped each instance's CPU utilization at a percentage corresponding to the ratio of its core limit to the node's total cores. In other words, Kubernetes spread the CPU load across all available worker node cores but kept aggregate utilization within the limits requested in the SQL MI configuration. Kubernetes internals may also have introduced an imbalance when distributing the load across worker nodes. More research is required into Kubernetes internals when used with a SQL Server workload.
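This behavior is consistent with how Kubernetes enforces CPU limits in general: a CPU limit is translated into a scheduler (CFS) quota, so a container's threads may run on any of the node's cores while their combined CPU time is throttled to the limit. The following is a minimal sketch of the relevant resource stanza using the standard Kubernetes Pod spec; the pod name and resource values are illustrative only and do not reflect our test configuration.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sqlmi-example          # hypothetical name for illustration
spec:
  containers:
  - name: mssql
    image: mcr.microsoft.com/mssql/server:2019-latest
    resources:
      requests:
        cpu: "4"               # scheduler reserves 4 cores' worth of CPU time
        memory: 8Gi
      limits:
        cpu: "4"               # enforced as a CFS quota: threads can run on
                               # any core, but total usage is throttled to
                               # the equivalent of 4 cores
        memory: 8Gi
```

With a 4-core limit on a node that has, for example, 16 cores, utilization can appear as roughly 25 percent spread across all cores rather than 100 percent on four of them, which matches the percentages we observed.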
For SQL MI specifications, see Test groups defined.