This section describes best practices for sizing your VDI deployment.
Platform configurations
The Virtual Workstation configuration provides the highest level of performance for more specialized VDI workloads, such as ISV applications and high-end computing workloads.
CPU
CPU considerations for graphics-enabled configurations include:
- For high-end graphics configurations with NVIDIA vWS graphics enabled, choose higher clock speeds over higher core counts. Many applications that benefit from high-end graphics are engineered with single-threaded CPU components. Higher clock speeds benefit users more in these workloads.
- For density-optimized configurations, use higher core counts over faster clock speeds to reduce CPU oversubscription; see the sketch after this list for a way to estimate the oversubscription ratio.
- Most graphics configurations do not experience high CPU oversubscription because vGPU resources are likely to be the resource constraint in the appliance.
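The oversubscription referenced above can be estimated with simple arithmetic: total allocated vCPUs divided by physical cores per host. The following minimal sketch shows that calculation; the desktop count, vCPU allocation, and core count are illustrative assumptions, not values from this design.

```python
# Sketch: estimating the vCPU-to-physical-core oversubscription ratio for one host.
# All inputs are illustrative assumptions, not values from this design.

def oversubscription_ratio(desktops_per_host: int,
                           vcpus_per_desktop: int,
                           physical_cores_per_host: int) -> float:
    """Return the vCPU:pCPU oversubscription ratio for one host."""
    return (desktops_per_host * vcpus_per_desktop) / physical_cores_per_host

if __name__ == "__main__":
    # Example: 48 desktops with 4 vCPUs each on a dual-socket host
    # with 32 physical cores per socket (64 cores total).
    ratio = oversubscription_ratio(desktops_per_host=48,
                                   vcpus_per_desktop=4,
                                   physical_cores_per_host=64)
    print(f"Oversubscription ratio: {ratio:.1f}:1")  # 3.0:1
```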
Memory
Best practices for memory allocation and configuration include:
- Do not overcommit memory when sizing, because memory is often not the constraining resource. Overcommitting memory increases the possibility of performance degradation if contention for memory resources occurs, which leads to swapping and ballooning. Overcommitted memory can also affect storage performance when swap files are created.
- Populate memory in units of eight DIMMs per CPU to yield the highest performance. Dell PowerEdge servers with 3rd Generation Intel Xeon Scalable processors have eight memory channels per CPU, controlled by four internal memory controllers that each handle two channels. To ensure that your environment has the optimal memory configuration, use a balanced configuration in which every channel is populated; each CPU supports a maximum of 16 DIMMs (32 DIMMs for a dual-CPU server). The most effective configuration with Intel Xeon Scalable processors is 16 DIMMs (8 per processor); a balanced-population check is sketched after this list.
- Use Intel Optane Persistent Memory (PMem) for cost savings over traditional DRAM or in situations where high memory capacity is required (1 TB or greater). vSphere 7 Update 3 introduced vSphere Memory Monitoring and Remediation (vMMR), which provides visibility into performance statistics for tiered memory. For additional information, see the VMware documentation on vMMR.
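As a worked example of the balanced-population guidance above, the following minimal sketch checks whether a proposed DIMM layout populates each of the eight memory channels per CPU evenly. The DIMM counts and sizes are illustrative assumptions, not configuration recommendations.

```python
# Sketch: checking a balanced DIMM population for a dual-socket server with
# eight memory channels per CPU, as described above. DIMM sizes are
# illustrative assumptions.

CHANNELS_PER_CPU = 8
CPUS = 2

def check_population(dimms_per_cpu: int, dimm_size_gb: int) -> None:
    """Report whether a layout fills every channel evenly and its total capacity."""
    if dimms_per_cpu % CHANNELS_PER_CPU != 0:
        print(f"{dimms_per_cpu} DIMMs per CPU is unbalanced: "
              f"not a multiple of {CHANNELS_PER_CPU} channels")
        return
    total_dimms = dimms_per_cpu * CPUS
    total_gb = total_dimms * dimm_size_gb
    print(f"{total_dimms} x {dimm_size_gb} GB DIMMs "
          f"({dimms_per_cpu} per CPU, balanced) = {total_gb} GB per host")

check_population(dimms_per_cpu=8, dimm_size_gb=64)   # 16 x 64 GB = 1024 GB, balanced
check_population(dimms_per_cpu=12, dimm_size_gb=64)  # flagged as unbalanced
```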
NVIDIA vGPU considerations
- The addition of GPU cards does not necessarily reduce CPU utilization. Instead, it enhances the user experience and offloads specific operations that are best performed by the GPU.
- Dell Technologies recommends using the VMware Blast Extreme protocol for vGPU-enabled desktops. NVIDIA GPUs are equipped with hardware encoders that support Blast Extreme.
- Virtual workstations are typically configured with at least a 2 GB frame buffer. Note: A 24 GB frame buffer was selected for each virtual workstation in this instance; Autodesk Maya users are typically virtual workstation users working with NVIDIA RTX vWS technology.
- Select a vGPU profile that matches your users' needs, balancing user density against the user-experience performance that is required; the sketch after this list shows how profile size determines density per GPU board.
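Profile selection directly determines user density because a physical GPU board hosts only as many vGPUs as whole profiles fit in its frame buffer. The following minimal sketch shows that arithmetic; the 48 GB board capacity is an illustrative assumption, while the 2 GB and 24 GB profile sizes echo the values mentioned above.

```python
# Sketch: estimating user density per GPU board from the vGPU profile's
# frame-buffer size. The board capacity below is an illustrative assumption.

def desktops_per_board(board_framebuffer_gb: int, profile_framebuffer_gb: int) -> int:
    """A board hosts as many vGPUs as whole profiles that fit in its frame buffer."""
    return board_framebuffer_gb // profile_framebuffer_gb

# Example: a 48 GB board split into 2 GB profiles versus a single 24 GB
# workstation profile (as used for the Autodesk Maya workstations above).
print(desktops_per_board(48, 2))    # 24 desktops per board
print(desktops_per_board(48, 24))   # 2 workstations per board
```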
External vCenter considerations
When using an external vCenter, the life cycle of the vCenter appliance is not managed by VxRail and must be managed manually. Before upgrading the VxRail clusters, ensure that the vCenter is upgraded to a supported version in accordance with the VxRail and external vCenter interoperability matrix. For additional information about the procedure to update the external vCenter appliance, see the Knowledge Base article VxRail: How to upgrade external vCenter appliance (login required).
Sizing considerations
Best practices for sizing your deployment include:
- User density—If concurrency is a concern, calculate how many users will use the environment at peak utilization. For example, if only 80 percent of users are active at a time, the environment must support only that number of users (plus failover capacity); the sizing sketch after Table 10 shows this calculation.
- Disaster recovery (DR)—For DR planning, Dell Technologies recommends implementing a dual-site or multi-site solution. The goal is to keep the environment online and, if there is an outage, to perform an environment recovery with minimal disruption to the business.
- Management and compute clusters—For our small test environment, we used a combined management and compute cluster. For environments deployed at a larger scale, we recommend that you separate the management and compute layers. When creating a management cluster for a large-scale deployment, consider using the E-Series VxRail or the PowerEdge R650 platform to reduce the data center footprint. For compute clusters, the more richly configurable V-Series VxRail or PowerEdge R750 platforms are preferred.
- Network isolation—When designing for larger-scale deployments, consider physically separating the management and VDI traffic from the vSAN traffic for traffic isolation and to improve network performance and scalability. This design illustrates a two-NIC configuration per appliance, with all traffic separated logically using VLANs.
- FTT—Dell Technologies recommends sizing storage with NumberOfFailuresToTolerate (FTT) set to 1, which means that you must double the total storage to accommodate the mirroring of each VMDK. A capacity-estimation sketch follows Table 10.
- Capacity Reserve—With the release of vSAN 7 Update 1, the previous recommendation of reserving 30 percent of slack space has been replaced with a dynamic recommendation that depends on the cluster size, the number of capacity drives, disk groups, and features in use. Optionally, you can enable new features such as “Operations reserve” and “Host rebuild reserve” to monitor the reserve capacity threshold, generate alerts when the threshold is reached, and prevent further provisioning. Dell Technologies recommends reviewing the VMware About Reserved Capacity documentation to fully understand the changes and new options that are available.
- All-flash compared with hybrid:
- Hybrid and all-flash configurations have similar performance results in the VDI environment under test. Because hybrid configurations use spinning drives, consider the durability of the disks.
- Only all-flash configurations offer deduplication and compression for vSAN. Dell Technologies recommends all-flash configurations for simplified data management.
- All-flash configurations need considerably less storage capacity than hybrid configurations to achieve the same FTT level, because all-flash supports RAID-5/6 erasure coding, as shown in the following table:
Table 10. FTT comparisons

| VM size | FTM | FTT | Overhead | Configuration | Capacity required | Hosts required |
|---------|-----|-----|----------|---------------|-------------------|----------------|
| 50 GB | RAID-1 (Mirrored) | 1 | 2x | Hybrid | 100 GB | 3 |
| 50 GB | RAID-5 (3+1) (Erasure coding) | 1 | 1.33x | All-flash | 66.5 GB | 4 |
| 50 GB | RAID-1 (Mirrored) | 2 | 3x | Hybrid | 150 GB | 4 |
| 50 GB | RAID-6 (4+2) (Erasure coding) | 2 | 1.5x | All-flash | 75 GB | 6 |

Note: For more details about multi-site design considerations for Horizon, see the VMware Workspace ONE and VMware Horizon Reference Architecture.
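To tie the sizing guidance together, the following minimal sketch estimates raw vSAN capacity for the desktop VMDKs from the concurrent user count and the storage-policy overhead factors in Table 10. The total user count, concurrency percentage, and 50 GB VMDK size are illustrative assumptions, not values from this design.

```python
# Sketch: estimating raw vSAN capacity from concurrency and the storage-policy
# overhead factors in Table 10. User counts and VMDK size are illustrative.

# Overhead factors from Table 10 (capacity required / VMDK size).
OVERHEAD = {
    "RAID-1, FTT=1": 2.0,
    "RAID-5 (3+1), FTT=1": 1.33,
    "RAID-1, FTT=2": 3.0,
    "RAID-6 (4+2), FTT=2": 1.5,
}

def concurrent_users(total_users: int, concurrency: float) -> int:
    """Peak concurrent sessions the environment must support."""
    return round(total_users * concurrency)

def raw_capacity_gb(users: int, vmdk_gb: float, policy: str) -> float:
    """Raw capacity needed for the desktop VMDKs under the given storage policy."""
    return users * vmdk_gb * OVERHEAD[policy]

if __name__ == "__main__":
    # Example: 1,000 entitled users with 80 percent peak concurrency.
    users = concurrent_users(total_users=1000, concurrency=0.80)
    for policy in OVERHEAD:
        gb = raw_capacity_gb(users, vmdk_gb=50, policy=policy)
        print(f"{policy}: {gb / 1024:.1f} TB raw for {users} desktops")
```

Capacity for failover hosts, management VMs, and the vSAN reserved capacity described above must still be added on top of this estimate.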