This section describes the key hardware and software components of the solution.
VxRail is a hyperconverged infrastructure (HCI) appliance that is available in 1U or 2U rack building blocks. VxRail is built on VMware vSAN technology and further enabled with Dell EMC VxRail HCI System software. The following figure shows the components of the VxRail appliance:
VxRail appliance platforms are equipped with Intel Xeon Scalable processors. You can deploy a cluster with as few as two nodes, providing an ideal environment for small deployments. Most clusters start with three nodes and can grow to as many as 64 nodes. To achieve full vSAN high availability, the recommended starting block is four nodes. In the VCF on VxRail use case, eight nodes are required: four for the management domain and four for the first workload domain.
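The node-count guidance above can be captured in a small illustrative sketch (Python; the thresholds are taken directly from the text, and the function name is hypothetical):

```python
def vxrail_cluster_sizing(nodes: int) -> str:
    """Classify a VxRail cluster size per the sizing guidance above.

    2 nodes  -> minimum supported, suited to small deployments
    3 nodes  -> typical starting point, can grow to 64 nodes
    4+ nodes -> recommended starting block for full vSAN high availability
    """
    if nodes < 2:
        raise ValueError("A VxRail cluster requires at least 2 nodes")
    if nodes > 64:
        raise ValueError("A VxRail cluster scales to at most 64 nodes")
    if nodes < 4:
        return "supported, but below the recommended HA starting block"
    return "meets the recommended 4-node starting block"

# VCF on VxRail: 4 nodes for the management domain plus 4 nodes for the
# first workload domain, for a minimum of 8 nodes.
VCF_MINIMUM_NODES = 4 + 4
```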
The VxRail appliance can support storage-heavy workloads with storage-dense nodes, graphics-heavy VDI workloads with GPU hardware, and entry-level nodes for remote and branch office environments.
The VxRail appliance enables you to start small and scale as your requirements increase. Single-node scaling and low-cost entry point options give you the freedom to buy just the right amount of storage and compute resources to start, and then add capacity to support growth. A single-node VxRail V Series appliance can be configured with 16 to 56 CPU cores per node and supports a maximum of 40 TB of raw storage in a hybrid configuration, or 76 TB with the all-flash option. A 64-node all-flash cluster delivers a maximum of 3,584 cores and 4,864 TB of raw storage. The following table shows the available platforms:
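The maximum cluster figures quoted above follow directly from the per-node maximums; a quick arithmetic sketch using only values stated in the text:

```python
# Per-node maximums for the VxRail V Series, as stated in the text above.
MAX_CORES_PER_NODE = 56            # configurable range is 16-56 cores
MAX_ALL_FLASH_TB_PER_NODE = 76     # raw storage with the all-flash option
MAX_NODES = 64                     # maximum cluster size

max_cores = MAX_NODES * MAX_CORES_PER_NODE          # 3,584 cores
max_raw_tb = MAX_NODES * MAX_ALL_FLASH_TB_PER_NODE  # 4,864 TB raw
```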
Each platform series in the table is available in all-flash and hybrid configurations.
Dell EMC Ready Architectures VDI-optimized configurations
For graphics-intensive desktop deployments, we recommend the VDI-optimized 2U/2-socket appliances that support GPU hardware.
The VxRail V Series can be configured with or without GPUs. Dell EMC also offers similar configurations in an E Series 1U/1-node appliance.
The following table designates common configurations. These designations are referenced throughout this guide.
Management Domain – VxRail E460F
CPU: 2 x Intel Xeon Silver 4214 (12 cores @ 2.2 GHz)
Memory: 192 GB (12 x 16 GB @ 2400 MHz)
Storage: 4 TB+ (capacity)
Use case: Offers a scalable and value-targeted configuration that meets the compute and I/O demands of the VCF management domain

Density-Optimized – VxRail V570F
CPU: 2 x Intel Xeon Gold 6248 (20 cores @ 2.5 GHz)
Memory: 576 GB (12 x 32 GB + 12 x 16 GB @ 2666 MHz)
Storage: 8 TB+ (capacity)
GPU: Up to 3 x full-length dual-width (FLDW), or up to 6 x full-length single-width (FLSW)
Use case: Offers an abundance of high-performance features and tiered capacity that maximizes user density

Virtual Workstation – VxRail V570F
CPU: 2 x Intel Xeon Gold 6254 (18 cores @ 3.1 GHz)
Memory: 384 GB (12 x 32 GB @ 2933 MHz)
Storage: 6 TB+ (capacity)
GPU: Up to 3 x FLDW, or up to 6 x FLSW
Use case: Offers even higher performance at the trade-off of user density; typically for ISV or high-end graphics workloads
vSAN software-defined storage
vSAN is available in hybrid or all-flash configurations. The following figure shows the logical layout and components of vSAN in this solution for the on-premises infrastructure with VCF on VxRail:
After vSAN is enabled on a cluster, all the disk devices that are presented to the hosts are pooled together to create a shared data store that is accessible by all hosts in the VMware vSAN cluster. This process is fully automated when using the VxRail First Run process. VMs can then be created with storage policies assigned to them. The storage policy dictates availability, performance, and sizing.
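The storage policy's availability setting drives raw capacity requirements. As an illustrative sketch (this sizing rule is standard vSAN behavior for RAID-1 mirroring, not stated in the text above; the function name is hypothetical and slack-space and metadata overhead are ignored):

```python
def raw_capacity_needed_tb(usable_tb: float, ftt: int = 1) -> float:
    """Estimate raw vSAN capacity for a RAID-1 (mirroring) storage policy.

    RAID-1 with failures-to-tolerate (FTT) = n keeps n + 1 full copies of
    each object, so raw capacity is roughly (n + 1) x usable capacity.
    """
    if ftt < 0:
        raise ValueError("FTT cannot be negative")
    return usable_tb * (ftt + 1)

# Example: 10 TB of VM data with the default FTT=1 policy needs ~20 TB raw.
```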
vSAN provides the following configuration options:
NVIDIA GPU accelerators provide high performance for demanding enterprise data center workloads. For enterprises deploying VDI, NVIDIA accelerators are ideally suited for accelerating virtual desktops. GPUs can be used in the V570, V570F, E560, E560F, and E560N appliance configurations.
An NVIDIA vGPU brings the full benefit of NVIDIA hardware-accelerated graphics to virtualized solutions. This technology provides exceptional graphics performance for virtual desktops equivalent to local PCs when sharing a GPU among multiple users.
NVIDIA GRID and Quadro vDWS vGPU software provide advanced technology for sharing true GPU hardware acceleration between multiple virtual desktops without compromising the graphics experience. NVIDIA offers the following vGPU software variants to enable graphics for different virtualization techniques:
Dell EMC Ready Architectures for VDI can be configured with NVIDIA Tesla T4 GPUs. Based on NVIDIA's newest Turing architecture, the T4 is considered the universal GPU for data center workflows. Add up to six GPU cards to your V570F appliance to enable up to 96 GB of video buffer. For modernized data centers, use this card in off-peak hours to perform your inferencing workloads.
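The 96 GB figure follows from the T4's framebuffer size; a quick arithmetic sketch (the 16 GB per-card framebuffer comes from NVIDIA's T4 specification, not the text above):

```python
T4_FRAMEBUFFER_GB = 16   # each NVIDIA Tesla T4 carries 16 GB of GDDR6
MAX_T4_PER_V570F = 6     # up to six cards per V570F appliance (per the text)

total_video_buffer_gb = MAX_T4_PER_V570F * T4_FRAMEBUFFER_GB  # 96 GB
```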
Networking components—physical networking
Dell EMC Ready Architectures for VDI allow for flexibility in networking selections for appliances. VDI validations have been performed successfully with the following hardware, although several other choices are available. All three switches listed in this section support the Open Network Install Environment (ONIE) for zero-touch installation of alternate network operating systems.
For more information, see PowerSwitch Data Center Switches.
NSX Data Center for vSphere
The foundation of the network virtualization layer for VCF on VxRail is provided by NSX Data Center for vSphere or VMware NSX-T Data Center, commonly referred to as NSX-V and NSX-T respectively. At the time of writing, the VCF management domain supports only NSX-V, but the VI WLD domains can use either NSX-V or NSX-T. These solutions provide a software-defined networking approach that delivers Layer 2 to Layer 7 networking services (for example, switching, routing, firewalling, and load balancing) in software. These services can be programmatically assembled in any arbitrary combination, producing unique, isolated virtual networks in a matter of seconds. NSX-T is considered the next generation and provides additional features that NSX-V does not provide. However, at the time of writing, NSX-T is not supported with the VMware Horizon workload deployment.
VMware NSX-T Data Center
VMware NSX-T Data Center (formerly NSX-T) provides an agile software-defined infrastructure to build cloud-native application environments.
NSX-T Data Center is focused on providing networking, security, automation, and operational simplicity for emerging application frameworks and architectures that have heterogeneous endpoint environments and technology stacks. NSX-T Data Center supports cloud-native applications, bare-metal workloads, multihypervisor environments, public clouds, and multiple clouds.
NSX-T Data Center is designed for management, operation, and consumption by development organizations. NSX-T Data Center enables IT and development teams to select the technologies that are best suited for their applications.
VMware NSX-T Data Center is the network virtualization platform for the SDDC, delivering networking and security entirely in software, abstracted from the underlying physical infrastructure. NSX-T is used exclusively in VMC on AWS.
The maximum number of ports per logical network is 1,000. Because multiple VLANs are not supported on NSX with Horizon, the maximum size of a Horizon 7 pool is also limited to 1,000 desktops. However, you can create multiple pools using different logical networks.
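The pool-count implication of this limit can be sketched as follows (illustrative only; the function name is hypothetical, and the 1,000-port limit comes from the text above):

```python
import math

MAX_PORTS_PER_LOGICAL_NETWORK = 1000  # NSX logical-network port limit

def horizon_pools_needed(desktops: int) -> int:
    """Minimum number of Horizon 7 pools (one logical network each)
    required for a deployment, given the per-network port limit."""
    return math.ceil(desktops / MAX_PORTS_PER_LOGICAL_NETWORK)

# Example: 2,500 desktops require 3 pools on 3 separate logical networks.
```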
Connectivity between your SDDC and on-premises data center
The connection between your VMC on AWS SDDC and on-premises data center is a critical component to plan and implement. Many different topologies can be implemented using a combination of physical and virtual networking components. The following are common methods of connecting your on-premises data center to VMC on AWS:
VMware Cloud Foundation
Cloud Foundation is an integrated software stack that bundles compute virtualization (VMware vSphere), storage virtualization (VMware vSAN), network virtualization (VMware NSX), and cloud management and monitoring (VMware vRealize Suite) into a single platform that can be deployed on premises as a private cloud or run as a service within a public cloud. This document focuses on the private cloud use case. Cloud Foundation helps to break down the traditional administrative silos in data centers, merging compute, storage, network provisioning, and cloud management to facilitate end-to-end support for application deployment.
VMware Cloud Foundation on VxRail
Cloud Foundation on VxRail delivers consistent infrastructure and consistent operations with edge, private, and public cloud workload deployment options for a true hybrid cloud solution, while allowing businesses to maintain flexibility of networking and topology.
For more simplified cloud deployment, Cloud Builder is a standardized automation tool that has been engineered to integrate with VxRail for deploying and configuring Cloud Foundation according to VMware’s SDDC standardized architecture.
Full-stack integration with Cloud Foundation on VxRail means that both the HCI infrastructure layer and the VMware cloud software stack life cycle are managed as one complete, automated, turnkey hybrid cloud experience, greatly reducing risk and increasing IT operational efficiency.
VxRail HCI system software includes unique integration between SDDC Manager and VxRail Manager that combines operational transparency with automation, support, and serviceability capabilities not found when deploying VMware Cloud Foundation on any other infrastructure.
The VMware Cloud Foundation on Dell EMC VxRail Administration Guide provides information about the VMware Cloud Foundation workflow on VxRail. For information about configuring VxRail, see the Dell EMC VxRail documentation on SolVe.
The following figure shows the VMware Cloud Foundation solution:
VMware SDDC Manager
VMware SDDC Manager provisions, manages, and monitors the logical and physical resources of Cloud Foundation by performing the following activities:
As you expand your Cloud Foundation environment horizontally by adding physical racks, the SDDC Manager enables data center administrators to configure the additional racks into a single pool of resources. This consolidates the compute, storage, and networking resources of the racks that are available for assignment to workloads.
VMware Cloud Foundation on VxRail uses VxRail Manager to deploy and configure vSphere clusters powered by vSAN. VxRail Manager is also used to run the life cycle management of ESXi, vSAN, and hardware firmware using a fully integrated and seamless SDDC Manager-orchestrated process. It monitors the health of hardware components and provides remote service support. This level of integration provides a truly unique, turnkey hybrid cloud experience that is not available on any other infrastructure.
VxRail Manager, which is available on VxRail appliances only, is the primary deployment and element manager interface of the appliance. It simplifies the entire life cycle from deployment through management, scaling, and maintenance. It also enables single-click upgrades and dashboard monitoring for health, events, and physical views.
The VMware Cloud on AWS minimum standard cluster configuration contains three hosts. Each host is an Amazon EC2 i3.metal instance. At the time of writing, these hosts have dual 2.3 GHz CPUs (a custom-built Intel Xeon Processor E5-2686 v4 CPU package) with 18 cores per socket (36 cores in total), 512 GiB of RAM, and 15.2 TB of raw NVMe storage.
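The aggregate resources of that minimum three-host cluster follow directly from the per-host figures above; a quick arithmetic sketch:

```python
# Per-host figures for the Amazon EC2 i3.metal instance, per the text above.
CORES_PER_HOST = 36          # dual 18-core sockets
RAM_GIB_PER_HOST = 512
RAW_NVME_TB_PER_HOST = 15.2
MIN_HOSTS = 3                # minimum standard VMC on AWS cluster

min_cluster = {
    "cores": MIN_HOSTS * CORES_PER_HOST,              # 108 cores
    "ram_gib": MIN_HOSTS * RAM_GIB_PER_HOST,          # 1,536 GiB
    "raw_nvme_tb": MIN_HOSTS * RAW_NVME_TB_PER_HOST,  # ~45.6 TB raw
}
```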
To take advantage of GPUs on VMC on AWS, you must deploy an EC2 instance in your connected Amazon VPC and configure AWS security policies and compute gateway firewall rules to allow a connection between your SDDC and that instance. For additional information, see the VMware Cloud on AWS Documentation.
The EC2 G4 instance supports graphics-intensive applications delivered as remote graphics workstations. It offers the following:
G4 instances are offered in different sizes with access to different amounts of vCPU and memory. For the product details for these instances, go to Amazon EC2 G4 Instances.
The g4dn.xlarge and g4dn.2xlarge should address most VDI use cases, but many options are available should you require additional performance.
VMware HCX is an application mobility platform that is designed for simplifying application migration, workload rebalancing, and business continuity across data centers and clouds.
The VMware HCX platform provides a hybrid interconnect to enable simple, secure, and scalable application migration and mobility within and across data centers and clouds.
HCX is normally deployed on top of an already established VPN or Direct Connect network connection between an SDDC in VMC on AWS and your on-premises data center.
Connecting to an SDDC
To connect your on-premises data center to your VMware Cloud on AWS SDDC, you can create a VPN that uses the public internet, a VPN that uses AWS Direct Connect, or just use AWS Direct Connect alone.
VMware HCX is designed for the following use cases:
Cloud-enabled features and workload mobility
To reduce downtime when migrating a VM to the cloud, the source VM remains online during replication and is bootstrapped on the destination ESXi host after replication completes. The migration also leaves behind a copy of the migrated VM in case there are issues with the migration. The copy can act as a seed if the VM on Site B must be protected on Site A. The following figure shows a VM migration.
The VM network is extended across to the cloud from the on-premises VM. This allows all pools that are created with Horizon to connect to the same domain and be on the same subnets. The following figure shows the network extension.
The on-premises VM is protected in the cloud, which allows for quick and easy restoring if anything happens. Multiple snapshots can be recovered. The following figure shows disaster recovery.