This section provides best practices for sizing your VDI deployment.
With several configurations to choose from, consider these basic differences:
- The Density Optimized configurations provide a good balance of performance and scalability for various general-purpose VDI workloads.
- The Virtual Workstation configurations provide the highest levels of performance for more specialized VDI workloads, such as ISV applications and high-end computing workloads.
User density and graphics considerations include:
- For architectures with 3rd Gen AMD EPYC processors:
  - Task workers—6.2 users per core. For example, 99 task users with dual eight-core processors.
  - Knowledge workers—4.9 users per core. For example, 78 knowledge users with dual eight-core processors.
- For graphics:
  - For high-end graphics configurations with NVIDIA vWS graphics enabled, choose higher clock speeds over higher core counts. Many applications that benefit from high-end graphics are engineered with single-threaded CPU components. Higher clock speeds benefit users more in these workloads.
  - For NVIDIA vPC configurations, use higher core counts over faster clock speeds to reduce oversubscription.
  - Most graphics configurations do not experience high CPU oversubscription because vGPU resources are likely to be the resource constraint in the appliance.
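The user-density arithmetic above can be sketched as a small helper. This is an illustrative sizing aid, not a Dell-published tool; the function name and default core counts are assumptions chosen to match the dual eight-core examples in the guidance:

```python
def users_per_host(users_per_core: float,
                   sockets: int = 2,
                   cores_per_socket: int = 8) -> int:
    """Estimate per-host user capacity from a users-per-core density.

    Rounds down, since a partial user session cannot be hosted.
    """
    return int(users_per_core * sockets * cores_per_socket)

# Dual eight-core 3rd Gen AMD EPYC examples from the guidance above:
task_users = users_per_host(6.2)       # 6.2 users/core x 16 cores -> 99
knowledge_users = users_per_host(4.9)  # 4.9 users/core x 16 cores -> 78
```

The same helper extends to other core counts, for example `users_per_host(6.2, cores_per_socket=16)` for a dual 16-core host.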
Best practices for memory allocation and configuration include:
- Do not overcommit memory when sizing, because memory is often not the constraining resource. Overcommitting memory increases the possibility of performance degradation if contention for memory resources occurs (for example, swapping and ballooning of memory). Overcommitted memory can also impact storage performance when swap-files are created.
- Populate memory in units of eight DIMMs per CPU to yield the highest performance. Dell PowerEdge servers using 3rd Gen AMD EPYC processors have eight memory channels per CPU, each driven by its own internal memory controller and supporting up to two memory DIMMs. To ensure that your environment has the optimal memory configuration, use a balanced configuration in which every channel is populated identically; each CPU supports a maximum of 16 DIMMs (32 DIMMs for a dual-CPU server). The most effective configuration with 3rd Gen AMD EPYC processors is 16 DIMMs (eight per processor), but 32 DIMMs have been found to perform acceptably as well. For more information, see Memory Population Rules for 3rd Generation AMD EPYC CPUs on PowerEdge Servers.
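The balanced-population rule above can be expressed as a quick check. This is a minimal sketch under the stated assumptions (eight channels per CPU, up to two DIMMs per channel); the function name is hypothetical:

```python
def is_balanced_dimm_config(total_dimms: int,
                            cpus: int = 2,
                            channels_per_cpu: int = 8,
                            max_dimms_per_channel: int = 2) -> bool:
    """A population is balanced when every memory channel on every CPU
    holds the same number of DIMMs (one or two per channel)."""
    per_cpu, rem = divmod(total_dimms, cpus)
    if rem:
        return False  # DIMMs cannot be split evenly across CPUs
    per_channel, rem = divmod(per_cpu, channels_per_cpu)
    return rem == 0 and 1 <= per_channel <= max_dimms_per_channel

# 16 DIMMs (one per channel) and 32 DIMMs (two per channel) are balanced
# on a dual-socket server; 12 DIMMs leaves channels unevenly populated.
```

For example, `is_balanced_dimm_config(16)` and `is_balanced_dimm_config(32)` return `True`, while `is_balanced_dimm_config(12)` returns `False`.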
NVIDIA vGPU considerations
- vPC licenses support up to 2 GB of frame buffer and up to two 4K monitors or a single 5K monitor, which covers most traditional VDI users. Maximum node density for graphics-accelerated use can typically be calculated as the available frame buffer per node divided by the frame buffer size per user.
- The addition of GPU cards does not necessarily reduce CPU utilization. Instead, it enhances the user experience and offloads specific operations best performed by the GPU.
- Dell Technologies recommends using the BLAST protocol for vGPU enabled desktops. NVIDIA GPUs are equipped with encoders that support BLAST.
- Virtual Workstations are typically configured with at least 2 GB of video buffer.
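The frame-buffer density rule above reduces to a single division. A minimal sketch, assuming a hypothetical function name and illustrative GPU capacities (the per-node frame buffer in your appliance depends on the installed GPUs):

```python
def max_vgpu_desktops(framebuffer_per_node_gb: float,
                      profile_gb: float) -> int:
    """Maximum graphics-accelerated desktops per node: available frame
    buffer divided by the vGPU profile's frame buffer size, rounded down.
    Frame buffer, not CPU, is usually the binding constraint."""
    return int(framebuffer_per_node_gb // profile_gb)

# Illustration: a node with 72 GB of total GPU frame buffer and a
# 2 GB vPC profile supports at most 72 / 2 = 36 desktops.
```

For example, `max_vgpu_desktops(72, 2)` returns 36; moving the same node to a 4 GB profile halves the density to 18.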
External vCenter considerations
When using an external vCenter, the life cycle of the vCenter appliance is not managed by VxRail and must be managed manually. Before upgrading the VxRail clusters, ensure that the vCenter is upgraded to a supported version in accordance with the VxRail and external vCenter interoperability matrix. For additional information about the procedure to update the external vCenter appliance, see the VxRail: How to upgrade external vCenter appliance Knowledge Base article (login required).
Best practices for sizing your deployment include:
- User density—If concurrency is a concern, calculate how many users will use the environment at peak utilization. For example, if only 80 percent of users are active at any one time, the environment must support only that number of concurrent users (plus a failure capacity).
- Disaster recovery—For DR planning, Dell Technologies recommends implementing a dual/multi-site solution. The goal is to keep the environment online and, in case of an outage, to perform an environment recovery with minimum disruption to the business.
- Management and compute clusters—For small test environments, it is acceptable to use a combined management and compute cluster. For environments deployed at a larger scale, we recommend that you separate the management and compute layers. When creating a management cluster for a large-scale deployment, consider using the E-Series VxRail or the PowerEdge R6515 platform to reduce the data center footprint. The more configurable P-Series VxRail or PowerEdge R7525 platforms are preferred for compute clusters.
- Network isolation—When designing for larger-scale deployments, consider physically separating the management and VDI traffic from the vSAN traffic for traffic isolation and to improve network performance and scalability. This design illustrates a two-NIC configuration per appliance with all the traffic separated logically using VLANs.
- FTT—Dell Technologies recommends sizing storage with NumberOfFailuresToTolerate (FTT) set to 1, which means that you must double the amount of total storage to accommodate the mirroring of each VMDK.
- Capacity Reserve—With the release of vSAN 7 Update 1, the previous recommendation of reserving 30 percent of slack space has been replaced with a dynamic recommendation that depends on the cluster size, the number of capacity drives, disk groups, and features in use. New features such as “Operations reserve” and “Host rebuild reserve” can be optionally enabled to monitor the reserve capacity threshold, generate alerts when the threshold is reached, and prevent further provisioning. Dell Technologies recommends reviewing VMware's About Reserved Capacity documentation to fully understand the changes and new options available.
- All-Flash compared with hybrid:
- Hybrid and all-flash configurations have similar performance results in the VDI environment under test. Because hybrid configurations use spinning drives, consider the durability of the disks.
- Only all-flash configurations offer deduplication and compression for vSAN. Dell Technologies recommends all-flash configurations for simplified data management.
- All-flash configurations need considerably less storage capacity than hybrid configurations to achieve the same FTT, as shown in the following table:
Table 11. FTT comparisons

| VM size | FTM | FTT | Overhead | Configuration | Capacity required | Hosts required |
|---------|-----|-----|----------|---------------|-------------------|----------------|
| 50 GB | RAID-1 (Mirrored) | 1 | 2x | Hybrid | 100 GB | 3 |
| 50 GB | RAID-5 (3+1) (Erasure coding) | 1 | 1.33x | All-flash | 66.5 GB | 4 |
| 50 GB | RAID-1 (Mirrored) | 2 | 3x | Hybrid | 150 GB | 5 |
| 50 GB | RAID-6 (4+2) (Erasure coding) | 2 | 1.5x | All-flash | 75 GB | 6 |

Note: The VMware Workspace ONE and VMware Horizon Reference Architecture provides more details about multi-site design considerations for Horizon.
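The concurrency and FTT-overhead sizing rules above can be combined into a small calculator. This is an illustrative sketch, not Dell-published tooling; the function names are hypothetical and the overhead factors come from Table 11:

```python
import math

def raw_capacity_needed(vm_size_gb: float, vm_count: int,
                        overhead: float) -> float:
    """Raw vSAN capacity required: usable demand multiplied by the
    protection overhead for the chosen FTM/FTT (e.g. 2x for FTT=1 RAID-1)."""
    return vm_size_gb * vm_count * overhead

def concurrent_users(total_users: int, concurrency: float) -> int:
    """Peak concurrent-user count to size for, rounded up; add failure
    capacity on top of this figure per the user-density guidance."""
    return math.ceil(total_users * concurrency)

# Illustration: 100 desktops of 50 GB at FTT=1 RAID-1 (2x overhead)
# need 50 * 100 * 2 = 10,000 GB of raw capacity, and a 1,000-user
# environment at 80 percent concurrency must support 800 active users.
```

For example, `raw_capacity_needed(50, 100, 2)` returns 10000.0 and `concurrent_users(1000, 0.8)` returns 800.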