In a standard deployment, the Cloud Foundation management WLD consists of workloads supporting the virtual infrastructure, cloud operations, cloud automation, business continuity, and security and compliance components for the SDDC. Using SDDC Manager, separate WLDs are allocated to tenant or containerized workloads. In a consolidated architecture, the Cloud Foundation management WLD runs both the management workloads and tenant workloads.
There are limitations to the consolidated architecture model that must be considered:
- The conversion from a consolidated architecture to a standard architecture requires a new VI WLD to be created. The tenant workloads must then be migrated to the new VI WLD. The recommended method for this migration is to use HCX.
- Use cases that require a VI WLD to be configured to meet specific application requirements cannot run on a consolidated architecture. The singular management WLD cannot be tailored to support management functionality and these use cases. If your plans include applications that require a specialized VI WLD (such as Horizon VDI or PKS), plan to deploy a standard architecture.
- Life-cycle management can be applied to individual VI WLDs in a standard architecture. If the applications targeted for Cloud Foundation on VxRail have strict dependencies on the underlying platform, a consolidated architecture is not an option.
- Autonomous licensing can be used in a standard architecture, where licensing can be applied to individual VI WLDs. In a consolidated architecture, this is not an option.
- Scalability in a consolidated architecture is less flexible than in a standard architecture. Expansion is limited to the underlying VxRail cluster or clusters supporting the single management WLD, because all resources are shared.
- If a VxRail cluster was built using two network interfaces that consolidate VxRail traffic and NSX-T traffic, any nodes later added to that cluster are likewise limited to two Ethernet ports for Cloud Foundation for VxRail.
Remote VxRail clusters
VCF 4.1 introduced remote clusters, a feature that extends a VCF WLD or a VCF VxRail cluster to operate at a site that is remote from the central VCF instance that manages it. All Cloud Foundation operational management can be administered from the central or regional data center out to the remote sites. Central administration and management is important because:
- It eliminates the need for technical or administrative support personnel at the remote locations, improving efficiency and significantly lowering operating expenses.
- Edge compute processing also allows customers to comply with data locality requirements that are driven by local government regulations.
- VCF Remote VxRail clusters establish a means to standardize operations and centralize the administration and software updates to all the remote locations.
The following diagram illustrates the remote VxRail cluster feature with three different edge sites where the remote VxRail clusters are located.
Figure 5. Remote VxRail Cluster Deployment
Remote VxRail Cluster Deployments
The following requirements must be met for remote VxRail cluster deployments:
- Minimum bandwidth of 10 Mbps.
- Maximum latency of 50 ms RTT.
- Only 3-4 nodes are supported at the edge (ROBO) sites.
- Primary and secondary active WAN links are highly recommended.
- DNS and NTP servers must be available locally at the edge site or be reachable from the edge site at the central site.
- A DHCP server must be available for the NSX-T host overlay (Host TEP) VLAN of the WLD. When NSX-T creates tunnel endpoints (TEPs) for the VI WLD, they are assigned IP addresses from the DHCP server. The DHCP server should be available locally at the edge site.
Failure to meet these requirements compromises the integrity, stability, resiliency, and security of the edge workload.
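The site requirements above can be folded into a simple pre-flight check. The following is a minimal sketch, not a Dell or VMware tool: the function name and inputs are hypothetical, and the measured bandwidth and latency values are assumed to come from external tools such as iperf or ping.

```python
# Sketch: pre-flight validation of a remote (edge) site against the
# documented VCF remote VxRail cluster requirements. Inputs are assumed
# to be gathered beforehand (e.g., with iperf/ping and a site survey).

def validate_edge_site(bandwidth_mbps: float, rtt_ms: float,
                       node_count: int, dhcp_on_host_tep_vlan: bool) -> list:
    """Return a list of requirement violations; an empty list means the site qualifies."""
    issues = []
    if bandwidth_mbps < 10:
        issues.append(f"bandwidth {bandwidth_mbps} Mbps is below the 10 Mbps minimum")
    if rtt_ms > 50:
        issues.append(f"latency {rtt_ms} ms RTT exceeds the 50 ms maximum")
    if not 3 <= node_count <= 4:
        issues.append(f"{node_count} nodes is outside the supported 3-4 node range")
    if not dhcp_on_host_tep_vlan:
        issues.append("no DHCP server on the NSX-T host overlay (Host TEP) VLAN")
    return issues

print(validate_edge_site(100, 20, 4, True))   # []
print(validate_edge_site(5, 80, 6, False))    # four violations listed
```

A real deployment check would also verify DNS/NTP reachability and the redundant WAN links; those depend on the site's network tooling and are omitted here.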
There are essentially two ways to deploy remote VxRail clusters: use a dedicated WLD per site with one or more VxRail clusters per WLD, or deploy VxRail clusters at the remote location into a WLD that already contains a VxRail cluster at the central location. The following diagram shows a WLD deployed for each remote site, with two VxRail clusters in VI WLD 02 at Edge site 1 and one VxRail cluster in VI WLD 03 at Edge site 2.
Figure 6. Remote WLD deployment model
The second deployment option is to deploy each site as a remote VxRail cluster in an existing VI WLD. This option reduces the number of VI WLDs and vCenters needed for the remote deployments as shown in the following diagram. In this scenario, we have an existing VI WLD 02 with a VxRail cluster from the central site and remote VxRail clusters from two different edge sites have been added to this WLD.
Figure 7. Remote VxRail cluster deployment model
Remote VxRail cluster network design
The remote sites require NSX-T edges to be deployed at each site for North/South connectivity. Also, connectivity from the central site to the remote site must be maintained to ensure connectivity of management components such as vCenter, SDDC Manager, NSX-T Manager, and so forth. As mentioned in the requirements, if DNS and NTP servers are running in the central site, they must be reachable from the Edge site.
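The connectivity dependency described above can also be checked with a small script run from the edge site. This is an illustrative sketch only: the hostnames are hypothetical placeholders, and NTP (UDP 123) cannot be probed with a TCP connect, so only TCP-based management endpoints are shown. The `connect` parameter is injectable purely so the logic can be exercised without a live network.

```python
# Sketch: verify that central-site management components are reachable
# from an edge site over TCP. Hostnames below are placeholders.
import socket

def reachable(host, port, timeout=3.0, connect=socket.create_connection):
    """Attempt a TCP connection to host:port; return True on success."""
    try:
        with connect((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Management components the edge site must reach (hypothetical names).
dependencies = [
    ("vcenter.central.example.com", 443),       # vCenter Server
    ("sddc-manager.central.example.com", 443),  # SDDC Manager
    ("nsx-manager.central.example.com", 443),   # NSX-T Manager
]

for host, port in dependencies:
    status = "ok" if reachable(host, port) else "UNREACHABLE"
    print(f"{host}:{port} -> {status}")
```

DNS and NTP reachability would be checked separately with the appropriate protocols (DNS queries and NTP polls) rather than TCP connects.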
Figure 8. Remote VxRail cluster network design