In a standard deployment, the Cloud Foundation management workload domain consists of workloads supporting the virtual infrastructure, cloud operations, cloud automation, business continuity, and security and compliance components for the SDDC. Using SDDC Manager, separate workload domains are allocated to tenant or containerized workloads. In a consolidated architecture, the Cloud Foundation management workload domain runs both the management workloads and tenant workloads.
There are limitations to the consolidated architecture model that must be considered:
- The decision to deploy a consolidated architecture must be made at the time of deployment, as a consolidated architecture cannot be converted to a standard architecture.
- Use cases that require a VI workload domain to be configured to meet specific application requirements cannot run on a consolidated architecture. The single management workload domain cannot be tailored to support both management functionality and these use cases. If your plans include applications that require a specialized VI workload domain (such as Horizon VDI or PKS), plan to deploy a standard architecture.
- Life-cycle management can be applied to individual VI workload domains in a standard architecture. If the applications targeted for Cloud Foundation on VxRail have strict dependencies on the underlying platform, a consolidated architecture is not an option.
- Autonomous licensing can be used in a standard architecture, where licensing can be applied to individual VI workload domains. In a consolidated architecture, this is not an option.
- Scalability in a consolidated architecture has less flexibility than a standard architecture. Expansion is limited to the underlying VxRail cluster or clusters supporting the single management workload domain in a consolidated architecture, as all resources are shared.
- If a VxRail cluster was built using two network interfaces, consolidating VxRail traffic and NSX-T traffic, any nodes later added to that cluster are also limited to two Ethernet ports for use with Cloud Foundation on VxRail.
Remote Clusters
VCF 4.1 introduced remote clusters, a feature that extends a VCF workload domain or a VCF cluster to operate at a site that is remote from the central VCF instance that manages it. All Cloud Foundation operational management can be administered from the central or regional data center out to the remote sites. Central administration and management is an important aspect because:
- It eliminates the need for technical or administrative support personnel at the remote locations, resulting in greater efficiency and much lower operating expenses.
- Edge compute processing also allows customers to comply with data locality requirements that are driven by local government regulations.
- VCF Remote Clusters establish a means to standardize operations and centralize the administration and software updates to all the remote locations.
The diagram below illustrates the remote cluster feature with three different edge sites where the remote clusters are located.

Figure 5. Remote Cluster Deployment
Remote Cluster Deployments
The following requirements must be met for remote cluster deployments:
- Minimum bandwidth of 10 Mbps.
- Maximum latency of 50 ms RTT.
- Only 3 to 4 nodes are supported at each edge (ROBO) site.
- Primary and secondary active WAN links are highly recommended.
- DNS and NTP servers must be available locally, or reachable from the edge site at the central site.
- A DHCP server must be available for the NSX-T host overlay (Host TEP) VLAN of the workload domain. When NSX-T creates host tunnel end points (TEPs) for the VI workload domain, they are assigned IP addresses from the DHCP server. The DHCP server should be available locally at the edge site.
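To illustrate the Host TEP DHCP requirement, a minimal ISC DHCP scope for a hypothetical Host TEP VLAN might look like the following. The subnet, address pool, and lease values here are assumptions for illustration, not VCF defaults; use the addressing plan for your own workload domain.

```
# /etc/dhcp/dhcpd.conf — example scope for a hypothetical NSX-T Host TEP VLAN
subnet 172.16.50.0 netmask 255.255.255.0 {
  range 172.16.50.100 172.16.50.199;   # pool from which host TEPs receive addresses
  option routers 172.16.50.1;          # gateway for the Host TEP VLAN
  default-lease-time 86400;            # 1 day
  max-lease-time 604800;               # 7 days
}
```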
Failure to adhere to these requirements compromises the integrity, stability, resiliency, and security of the edge workload.
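The connectivity requirements above can be sanity-checked before deployment. The sketch below, using assumed host names and the thresholds stated above, verifies DNS resolution and approximates RTT with a timed TCP connect; it is a rough pre-check under those assumptions, not a substitute for a proper WAN assessment.

```python
import socket
import time

MAX_RTT_MS = 50.0        # remote cluster requirement: 50 ms round-trip time
MIN_BANDWIDTH_MBPS = 10  # minimum WAN bandwidth (must be verified separately)

def resolve_dns(hostname):
    """Return True if this site can resolve the name via its configured DNS."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

def measure_rtt_ms(host, port=443, timeout=2.0):
    """Rough RTT estimate from a TCP connect; returns None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

def meets_latency_requirement(rtt_ms):
    """True only if a measurement exists and is within the 50 ms RTT limit."""
    return rtt_ms is not None and rtt_ms <= MAX_RTT_MS
```

For example, at an edge site you might call `resolve_dns("sddc-manager.example.local")` and `meets_latency_requirement(measure_rtt_ms("sddc-manager.example.local"))` against the central-site management components (the host name is a placeholder).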
There are essentially two ways to deploy remote clusters: a dedicated WLD per site with one or more clusters per WLD, or clusters deployed at the remote location within a WLD that has an existing cluster in the central location. The following diagram shows a WLD deployed for each remote site, with two clusters in VI WLD 02 at edge site 1 and one cluster in VI WLD 03 at edge site 2.

Figure 6. Remote WLD deployment model
The second deployment option is to deploy each site as a remote cluster in an existing VI WLD, which reduces the number of VI WLDs and vCenter instances needed for remote deployments, as shown in the following diagram. In this scenario, an existing VI WLD 02 contains a cluster from the central site, and remote clusters from two different edge sites have been added to this WLD.

Figure 7. Remote cluster deployment model
Remote cluster network design
The remote sites require NSX-T Edges to be deployed at each site for north-south connectivity. Connectivity from the central site to the remote site must also be maintained to ensure reachability of management components such as vCenter Server, SDDC Manager, and NSX-T Manager. As noted in the requirements, if DNS and NTP servers run in the central site, they must be reachable from the edge site.

Figure 8. Remote Cluster network design