This section provides best practices for sizing your VDI deployment.
Platform configurations
With several configurations to choose from, consider these basic differences:
- The Density Optimized configuration provides a good balance of performance and scalability for various general-purpose VDI workloads.
- The Virtual Workstation configuration provides the highest levels of performance for more specialized VDI workloads, such as ISV applications and high-end computing.
CPU
User density and graphics considerations include:
- For architectures with Ice Lake processors:
- Task workers—6.6 users per core. For example, 105 task users with dual eight-core processors
- Knowledge workers—4.3 users per core. For example, 70 knowledge users with dual eight-core processors
- For graphics:
- For high-end graphics configurations with NVIDIA vWS graphics enabled, choose higher clock speeds over higher core counts. Many applications that benefit from high-end graphics are engineered with single-threaded CPU components. Higher clock speeds benefit users more in these workloads.
- For NVIDIA vPC configurations, use higher core counts over faster clock speeds to reduce oversubscription.
- Most graphics configurations do not experience high CPU oversubscription because vGPU resources are likely to be the resource constraint in the appliance.
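The per-core density guidance above can be turned into a simple host-level estimate. The following sketch uses the Ice Lake ratios from this section (6.6 task users and 4.3 knowledge users per core); the function name and floor rounding are illustrative, not part of any Dell tooling.

```python
# Per-core density ratios from the Ice Lake guidance in this section.
USERS_PER_CORE = {"task": 6.6, "knowledge": 4.3}

def users_per_host(worker_type: str, sockets: int, cores_per_socket: int) -> int:
    """Estimate concurrent users a host can support, rounded down."""
    total_cores = sockets * cores_per_socket
    return int(total_cores * USERS_PER_CORE[worker_type])

# Dual eight-core processors (16 cores total):
print(users_per_host("task", 2, 8))       # 105 task users
print(users_per_host("knowledge", 2, 8))  # 68 (the text rounds this to ~70)
```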
Memory
Best practices for memory allocation and configuration include:
- Do not overcommit memory when sizing because memory is often not the constraining resource. Overcommitting memory increases the possibility of performance degradation if contention for memory resources occurs, such as swapping and ballooning of memory. Overcommitted memory can also affect storage performance when swap files are created.
- Populate memory in units of eight DIMMs per CPU to yield the highest performance. Dell EMC PowerEdge servers using 3rd Generation Intel Xeon Scalable processors have eight memory channels per CPU, controlled by four internal memory controllers that each handle two memory channels. For optimal performance, use a balanced configuration: although each CPU supports a maximum of 16 DIMMs (32 DIMMs in a dual-CPU server), the most effective configuration with Intel Xeon Scalable processors is 16 DIMMs in total (eight per processor).
- Use Intel Optane Persistent Memory (PMem) for cost savings over traditional DRAM or in situations where high memory capacity is required (1 TB or greater). vSphere 7 Update 3 introduced vSphere Memory Monitoring and Remediation (vMMR), which provides visibility of performance statistics for tiered memory. For additional information, see the VMware documentation on vMMR.
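The no-overcommit guidance above amounts to a simple check: planned VM memory plus host overhead must fit within physical RAM. This sketch assumes a balanced dual-socket layout of 16 DIMMs (eight per CPU) as described; the 32 GB hypervisor-overhead figure is an illustrative assumption, not a Dell recommendation.

```python
def fits_without_overcommit(n_vms: int, gb_per_vm: int, dimm_gb: int,
                            dimms: int = 16, host_overhead_gb: int = 32) -> bool:
    """Return True if VM memory demand plus host overhead fits physical RAM."""
    physical_gb = dimm_gb * dimms                  # balanced: 8 DIMMs per CPU, 2 CPUs
    demand_gb = n_vms * gb_per_vm + host_overhead_gb
    return demand_gb <= physical_gb

# 105 task users at 4 GB each on 16 x 32 GB DIMMs (512 GB physical):
print(fits_without_overcommit(105, 4, 32))  # True (452 GB demand vs 512 GB)
```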
NVIDIA vGPU considerations
- vPC licenses support up to 2 GB of frame buffer and up to two 4K monitors or a single 5K monitor, which covers most traditional VDI users. Maximum node density for graphics-accelerated use can typically be calculated as the available frame buffer per node divided by the per-desktop frame buffer size.
- The addition of GPU cards does not necessarily reduce CPU utilization. Instead, it enhances the user experience and offloads specific operations best performed by the GPU.
- Dell Technologies recommends using the BLAST protocol for vGPU enabled desktops. NVIDIA GPUs are equipped with encoders that support BLAST.
- Virtual Workstations are typically configured with at least a 2 GB video buffer.
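The density rule stated above (available frame buffer per node divided by frame buffer size) can be sketched as follows. The GPU count and 24 GB card size in the example are illustrative assumptions, not a specific platform configuration.

```python
def vgpu_desktops_per_node(gpus_per_node: int, fb_per_gpu_gb: int,
                           fb_per_desktop_gb: int) -> int:
    """Maximum vGPU desktops per node, limited by total frame buffer."""
    return (gpus_per_node * fb_per_gpu_gb) // fb_per_desktop_gb

# Example: three 24 GB GPUs per node with a 2 GB vPC profile per desktop:
print(vgpu_desktops_per_node(3, 24, 2))  # 36 desktops
```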
External vCenter considerations
When using an external vCenter, the life cycle of the vCenter appliance is not managed by VxRail and must be managed manually. Before upgrading the VxRail clusters, ensure that the vCenter is upgraded to a supported version in accordance with the VxRail and external vCenter interoperability matrix. For additional information about the procedure to update the external vCenter appliance, see the VxRail: How to upgrade external vCenter appliance Knowledge Base article (login required).
Sizing considerations
This section provides best practices for sizing your deployment.
- User density—If concurrency is a concern, calculate how many users will use the environment at peak utilization. For example, if only 80 percent of users are active at any time, the environment must support only that number of concurrent users (plus failover capacity).
- Disaster recovery—For DR planning, Dell Technologies recommends implementing a dual/multi-site solution. The goal is to keep the environment online and, in case of an outage, to perform an environment recovery with minimum disruption to the business.
- Management and compute clusters—For our small test environment, we used a combined management and compute cluster. For environments deployed at a larger scale, we recommend that you separate the management and compute layers. When creating a management cluster for a large-scale deployment, consider using the E-Series VxRail or the PowerEdge R650 platform to reduce the data center footprint. The more easily configured V-Series VxRail or PowerEdge R750 platforms are preferred for compute clusters.
- Network isolation—When designing for larger-scale deployments, consider physically separating the management and VDI traffic from the vSAN traffic for traffic isolation and to improve network performance and scalability. This design illustrates a two-NIC configuration per appliance with all the traffic separated logically using VLANs.
- FTT—Dell Technologies recommends sizing storage with NumberOfFailuresToTolerate (FTT) set to 1, which means that you must double the amount of total storage to accommodate the mirroring of each VMDK.
- Capacity Reserve—With the release of vSAN 7 Update 1, the previous recommendation of reserving 30 percent of slack space has been replaced with a dynamic recommendation that depends on the cluster size, the number of capacity drives, disk groups, and features in use. New features such as “Operations reserve” and “Host rebuild reserve” can be optionally enabled to monitor the reserve capacity threshold, generate alerts when the threshold is reached, and prevent further provisioning. Dell Technologies recommends reviewing VMware's About Reserved Capacity documentation to fully understand the changes and new options available.
- All-Flash compared with hybrid:
- Hybrid and all-flash configurations have similar performance results in the VDI environment under test. Because hybrid configurations use spinning drives, consider the durability of the disks.
- Only all-flash configurations offer deduplication and compression for vSAN. Dell Technologies recommends all-flash configurations for simplified data management.
- All-flash configurations need considerably less storage capacity than hybrid configurations to produce similar FTT, as shown in the following table:
Table 8. FTT comparisons

| VM size | FTM | FTT | Overhead | Configuration | Capacity required | Hosts required |
|---------|-----|-----|----------|---------------|-------------------|----------------|
| 50 GB | RAID-1 (Mirrored) | 1 | 2x | Hybrid | 100 GB | 3 |
| 50 GB | RAID-5 (3+1) (Erasure coding) | 1 | 1.33x | All-flash | 66.5 GB | 4 |
| 50 GB | RAID-1 (Mirrored) | 2 | 3x | Hybrid | 150 GB | 4 |
| 50 GB | RAID-6 (4+2) (Erasure coding) | 2 | 1.5x | All-flash | 75 GB | 6 |

Note: The Citrix Design Decision: Designing StoreFront and Multi-Site Aggregation provides more details about multi-site design considerations for Citrix Virtual Apps and Desktops.
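The capacity figures in Table 8 follow directly from multiplying the VM size by the overhead factor of the chosen fault tolerance method (FTM). A minimal sketch of that arithmetic, using the overhead factors from the table:

```python
# Overhead factors per (FTM, FTT) pair, taken from Table 8.
OVERHEAD = {
    ("RAID-1", 1): 2.0,    # mirrored, FTT=1
    ("RAID-5", 1): 1.33,   # 3+1 erasure coding, all-flash only
    ("RAID-1", 2): 3.0,    # mirrored, FTT=2
    ("RAID-6", 2): 1.5,    # 4+2 erasure coding, all-flash only
}

def required_capacity_gb(vm_size_gb: float, ftm: str, ftt: int) -> float:
    """Raw vSAN capacity needed for one VMDK at the given protection level."""
    return vm_size_gb * OVERHEAD[(ftm, ftt)]

print(required_capacity_gb(50, "RAID-5", 1))  # 66.5 GB
print(required_capacity_gb(50, "RAID-1", 2))  # 150.0 GB
```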