Calculate VDI VM density
For this testing, each PowerEdge server is configured with 1024 GB of memory and dual 32-core AMD EPYC processors. With 11 servers in the cluster, this configuration provides 11,264 GB of memory and 704 CPU cores in aggregate. Each host is also configured with dual 25 GbE NICs for file and iSCSI connectivity.
A VMware Horizon 8 pool of 2,500 virtual desktops running Windows 10 21H1 is configured with 2 vCPUs and 4 GB of memory per desktop. This configuration supports a density of approximately 227 VMs per host, which consumes about 88 percent of host memory.
Note: If the total VDI VM memory demand exceeds the available memory on a host, caching to disk is used. Caching can negatively impact the VDI user experience, so oversubscribing server memory should be avoided.
| R6525 host | Host memory (GB) | VDI VM density | Host memory consumed by VDI VMs (GB) | Host memory % consumed by VMs |
|---|---|---|---|---|
| 1 | 1024 | 227 | 908 | 88% |
| 2 | 1024 | 227 | 908 | 88% |
| 3 | 1024 | 227 | 908 | 88% |
| 4 | 1024 | 227 | 908 | 88% |
| 5 | 1024 | 227 | 908 | 88% |
| 6 | 1024 | 227 | 908 | 88% |
| 7 | 1024 | 227 | 908 | 88% |
| 8 | 1024 | 227 | 908 | 88% |
| 9 | 1024 | 228 | 912 | 89% |
| 10 | 1024 | 228 | 912 | 89% |
| 11 | 1024 | 228 | 912 | 89% |
| Total | 11,264 | 2,500 | 10,000 | 89% |
Ensure that there is adequate memory in the remaining host servers in the cluster so that a host can be taken offline for maintenance without impacting the workload.
In this example, all 2,500 VMs in the pool are powered on and one host is put in maintenance mode. After the VMs are migrated from the host undergoing maintenance, the consumed memory on the remaining hosts increases to 98%. If more host memory headroom is wanted for maintenance scenarios in your environment, consider adding hosts or memory to the cluster, or reducing the number of virtual desktops in the pool.
Note: This scenario assumes that maintenance activities are performed outside business hours so that they do not impact VDI performance.
| R6525 host | Host memory (GB) | VDI VM density | Host memory consumed by VDI VMs (GB) | Host memory % consumed by VMs |
|---|---|---|---|---|
| 1 | Maintenance mode (all VDI VMs moved to other hosts) | | | |
| 2 | 1024 | 250 | 1,000 | 98% |
| 3 | 1024 | 250 | 1,000 | 98% |
| 4 | 1024 | 250 | 1,000 | 98% |
| 5 | 1024 | 250 | 1,000 | 98% |
| 6 | 1024 | 250 | 1,000 | 98% |
| 7 | 1024 | 250 | 1,000 | 98% |
| 8 | 1024 | 250 | 1,000 | 98% |
| 9 | 1024 | 250 | 1,000 | 98% |
| 10 | 1024 | 250 | 1,000 | 98% |
| 11 | 1024 | 250 | 1,000 | 98% |
| Total | 10,240 | 2,500 | 10,000 | 98% |
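The maintenance-mode figures are the same arithmetic applied to an N-1 cluster. A short sketch (assumptions match this section's sizing) of the headroom check:

```python
# N-1 maintenance scenario: one host drained, its VMs redistributed
hosts = 11
host_memory_gb = 1024
total_vms = 2500
vm_memory_gb = 4

remaining_hosts = hosts - 1                      # 10 hosts carry the pool
vms_per_host = total_vms / remaining_hosts       # 250 VMs per remaining host
consumed_gb = vms_per_host * vm_memory_gb        # 1,000 GB per host
utilization = consumed_gb / host_memory_gb       # 1000 / 1024

print(f"{vms_per_host:.0f} VMs, {consumed_gb:.0f} GB, {utilization:.0%}")  # 250 VMs, 1000 GB, 98%
```

The same check can be rerun with a larger `hosts` value or a smaller pool to verify whether a planned configuration keeps the N-1 utilization at an acceptable level.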
Each virtual desktop in the pool is configured with 2 vCPUs, so the total vCPU count for 2,500 VMs is 5,000. Across the 704 physical cores in the cluster, this is an oversubscription ratio of approximately 7.1 vCPUs per CPU core. Testing is recommended to determine the amount of host CPU oversubscription that is supported without a degradation in performance. In this case, testing showed that the configuration performed well at scale for all phases of the VDI life cycle, with no degradation in performance.
The network speed for this testing is 25 GbE end-to-end with all recommended best practices applied. 100 GbE trunking is configured between data switches. Testing demonstrated the configuration to be more than adequate for the VDI workload at scale.