In a standard deployment, the VMware Cloud Foundation management WLD consists of workloads supporting the virtual infrastructure, cloud operations, cloud automation, business continuity, and security and compliance components for the SDDC. Using SDDC Manager, separate WLDs are allocated to tenant or containerized workloads. In a consolidated architecture, the VMware Cloud Foundation management WLD runs both the management workloads and tenant workloads.
There are limitations to the consolidated architecture model that must be considered:
- Converting from a consolidated to a standard architecture requires a new VI WLD to be created. The tenant workloads must then be migrated to the new VI WLD. The recommended method for this migration is to use HCX.
- Use cases that require a VI WLD to be configured to meet specific application requirements cannot run on a consolidated architecture. The singular management WLD cannot be tailored to support management functionality and these use cases. If your plans include applications that require a specialized VI WLD (such as Horizon VDI or PKS), plan to deploy a standard architecture.
- Life-cycle management can be applied to individual VI WLDs in a standard architecture. If the applications targeted for VMware Cloud Foundation on VxRail have strict dependencies on the underlying platform, consolidated architecture is not an option.
- Autonomous licensing can be used in a standard architecture, where licensing can be applied to individual VI WLDs. In a consolidated architecture, autonomous licensing is not an option.
- Scalability in a consolidated architecture is less flexible than in a standard architecture. Expansion is limited to the underlying VxRail cluster or clusters supporting the single management WLD because all resources are shared.
- If a VxRail cluster was built using two network interfaces, consolidating VxRail traffic and NSX-T traffic, any nodes added to that cluster are also limited to using two Ethernet ports for VMware Cloud Foundation on VxRail.
VCF 4.1 introduced remote clusters. Remote clusters extend a VCF WLD or a VCF VxRail cluster to operate at a site that is remote from the central VCF instance from which it is managed. All the VMware Cloud Foundation operational management can be administered from the central or regional data center out to the remote sites, which is important because:
- It eliminates the need to have technical or administrative support personnel at the remote locations, resulting in greater efficiency and significantly lower operating expenses.
- Edge compute processing also allows customers to comply with data locality requirements that are driven by local government regulations.
- VCF Remote VxRail clusters establish a means to standardize operations and centralize the administration and software updates to all the remote locations.
The following figure illustrates the remote VxRail cluster feature with three different Edge sites where the remote VxRail clusters are located:
Figure 5. Remote VxRail cluster deployment
Remote VxRail cluster deployments
The following requirements must be met for remote VxRail cluster deployments. Failure to meet these requirements can compromise the integrity, stability, resiliency, and security of the Edge workload.
- A minimum of 10 Mbps of bandwidth between the central site and the Edge site.
- A maximum RTT latency of 50 milliseconds between the central site and the Edge site.
- Support for only three or four nodes at the Edge (ROBO) sites.
- Primary and secondary active WAN links (not required, but highly recommended).
- DNS and NTP servers available locally at the Edge site, or reachable from the Edge site at the central site.
- A DHCP server that is available for the NSX-T host overlay (Host TEP) VLAN of the WLD. When NSX-T creates host Tunnel End Points (TEPs) for the VI WLD, the TEPs are assigned IP addresses from the DHCP server. The DHCP server should be available locally at the Edge site.
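The requirements above can be screened before deployment. The following is a minimal sketch of such a pre-check: the thresholds are taken from the requirements listed above, while the `check_edge_site` function and its measurement inputs are hypothetical — the actual bandwidth and latency figures would come from your own WAN testing tools.

```python
# Hypothetical pre-deployment check against the documented remote
# VxRail cluster requirements: 10 Mbps minimum bandwidth, 50 ms
# maximum RTT, and three or four nodes at the Edge (ROBO) site.
MIN_BANDWIDTH_MBPS = 10
MAX_RTT_MS = 50
SUPPORTED_NODE_COUNTS = {3, 4}

def check_edge_site(bandwidth_mbps: float, rtt_ms: float, node_count: int) -> list:
    """Return a list of requirement violations for a candidate Edge site."""
    issues = []
    if bandwidth_mbps < MIN_BANDWIDTH_MBPS:
        issues.append(
            f"Bandwidth {bandwidth_mbps} Mbps is below the "
            f"{MIN_BANDWIDTH_MBPS} Mbps minimum")
    if rtt_ms > MAX_RTT_MS:
        issues.append(f"RTT {rtt_ms} ms exceeds the {MAX_RTT_MS} ms maximum")
    if node_count not in SUPPORTED_NODE_COUNTS:
        issues.append(
            f"{node_count} nodes is outside the supported 3-4 node range")
    return issues

# Example: a compliant site produces no findings; a non-compliant one
# reports every violated requirement.
print(check_edge_site(bandwidth_mbps=100, rtt_ms=20, node_count=4))  # []
print(check_edge_site(bandwidth_mbps=5, rtt_ms=80, node_count=2))
```

A check like this only validates point-in-time measurements; sustained WAN performance under load still needs to be verified with the network team.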
There are essentially two ways to deploy remote VxRail clusters. You can either use a dedicated WLD per site with one or more VxRail clusters per WLD or deploy VxRail clusters at the remote location in a WLD with an existing VxRail cluster in the central location. The following figure shows a WLD deployed for each remote site with two VxRail clusters in Edge site 1 VI WLD 02 and one VxRail cluster deployed at Edge site 2 in VI WLD 03:
Figure 6. Remote WLD deployment model
The second deployment option is to deploy each site as a remote VxRail cluster in an existing VI WLD. This option reduces the number of VI WLDs and vCenter instances needed for the remote deployments, as shown in the following figure. In this scenario, we have an existing VI WLD 02 with a VxRail cluster from the central site. Remote VxRail clusters from two different Edge sites have been added to this WLD.
Figure 7. Remote VxRail cluster deployment model
Remote VxRail cluster network design
The remote sites require NSX-T Edge nodes to be deployed at each site for north-south connectivity. In addition, connectivity from the central site to the remote site must be maintained so that management components such as vCenter Server, SDDC Manager, and NSX-T Manager remain reachable. As mentioned in the requirements, if DNS and NTP servers are running in the central site, they must be reachable from the Edge site.
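The reachability of the central-site management components and shared services can be spot-checked from the Edge site. The sketch below assumes illustrative placeholder hostnames and standard ports (not actual values from this design); note that a simple TCP probe does not cover UDP-based services such as NTP.

```python
# Hedged sketch: probe TCP reachability from an Edge-site host to
# central-site management components. Hostnames are placeholders.
import socket

CENTRAL_SERVICES = {
    "vcenter.example.local": 443,        # vCenter Server (HTTPS)
    "sddc-manager.example.local": 443,   # SDDC Manager (HTTPS)
    "nsx-manager.example.local": 443,    # NSX-T Manager (HTTPS)
    "dns.example.local": 53,             # DNS (TCP; UDP 53 also needed)
}

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in CENTRAL_SERVICES.items():
    status = "OK" if reachable(host, port) else "UNREACHABLE"
    print(f"{host}:{port} -> {status}")
```

Reachability alone is not sufficient: latency to these components must also stay within the 50 ms RTT requirement for the remote cluster to be supportable.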
Figure 8. Remote VxRail cluster network design