This chapter describes the architecture required to provide a common VxRail hyperconverged infrastructure that supports a variety of workloads and end-user profiles in FSI environments. As mentioned previously, VxRail is one approach; other approaches may be better suited for certain organizations, depending on their requirements.
The solution defined here focuses on the financial vertical market, in which various user profiles exist, and provides configuration guidance for the VDI environment. Both graphical and non-graphical workloads, along with the associated financial applications, have been characterized and validated in Dell Technologies engineering labs on VDI-optimized VxRail systems.
The core elements of the solution are as follows:
- VxRail or Dell Technologies Cloud Platform (DTCP) based on VxRail:
  - Minimum of 4 VxRail E560F (management domain)
  - Minimum of 4 VxRail V570F (compute domain)
  - NVIDIA T4 Tensor Core GPUs (on a single host)
- Software:
  - VMware Cloud Foundation 4.x
  - VMware SDDC (VMware vSphere and vSAN)
  - VMware Horizon 2012 (8.1)
  - VMware Dynamic Environment Manager
  - VMware App Volumes
  - NVIDIA Virtual GPU
This section provides an architecture overview and guidance on managing and scaling a VMware Horizon 8 environment on VxRail systems.
The following figure shows the architecture of the validated solution, including the network, compute, management, and storage layers. This architecture aligns with the VMware Horizon pod and block design. A pod is a group of interconnected Horizon Connection Servers that broker connections to desktops or published applications. A pod contains multiple blocks to provide scalability, and a block is a collection of one or more vSphere clusters that host pools of desktops or applications. Each block has a dedicated vCenter Server.

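To make the pod and block relationships concrete, the following sketch models the hierarchy described above as simple data structures. The class names, server names, and the sample block layout are illustrative assumptions for clarity, not part of the validated design.

```python
# Illustrative model of the Horizon pod-and-block hierarchy described above.
# Names and example values are assumptions, not validated configuration.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Block:
    """A block: one or more vSphere clusters with a dedicated vCenter Server."""
    name: str
    vcenter_server: str
    vsphere_clusters: List[str] = field(default_factory=list)


@dataclass
class Pod:
    """A pod: interconnected Connection Servers brokering to one or more blocks."""
    name: str
    connection_servers: List[str]
    blocks: List[Block] = field(default_factory=list)


# Hypothetical example: one pod with two blocks, each backed by its own vCenter.
pod = Pod(
    name="horizon-pod-01",
    connection_servers=["cs-01", "cs-02"],
    blocks=[
        Block("block-01", "vcenter-block-01", ["vdi-cluster-01"]),
        Block("block-02", "vcenter-block-02", ["vdi-cluster-02", "vdi-cluster-03"]),
    ],
)

for block in pod.blocks:
    print(f"{pod.name} -> {block.name} (vCenter: {block.vcenter_server}, "
          f"clusters: {', '.join(block.vsphere_clusters)})")
```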
The deployment option for this Dell Technologies VDI Solution supports all cloning methods available from VMware, including full and instant clones.
A vSAN-enabled vSphere cluster supports a maximum of 64 nodes and 6,400 virtual machines (VMs). To expand beyond these limits, add clusters and balance the VMs and nodes across them.
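As a rough illustration of that scaling guidance, the sketch below computes how many vSAN clusters a given desktop count would require under the 64-node and 6,400-VM per-cluster maximums. The target desktop count and desktops-per-node density are hypothetical inputs; actual sizing depends on the workload profiles validated for this solution.

```python
import math

# Per-cluster maximums cited above for a vSAN-enabled vSphere cluster.
MAX_NODES_PER_CLUSTER = 64
MAX_VMS_PER_CLUSTER = 6_400

# Hypothetical sizing inputs; real values come from workload characterization.
target_desktops = 10_000
desktops_per_node = 120  # assumed density for a given user profile

nodes_needed = math.ceil(target_desktops / desktops_per_node)

# Either limit (nodes or VMs) can force an additional cluster.
clusters_for_nodes = math.ceil(nodes_needed / MAX_NODES_PER_CLUSTER)
clusters_for_vms = math.ceil(target_desktops / MAX_VMS_PER_CLUSTER)
clusters_needed = max(clusters_for_nodes, clusters_for_vms)

print(f"{target_desktops} desktops -> {nodes_needed} nodes -> "
      f"{clusters_needed} vSAN cluster(s)")
```

In this example, 10,000 desktops at the assumed density require 84 nodes, so two vSAN clusters are needed; the nodes and VMs would then be balanced across those clusters as described above.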