From a logical perspective, traffic flows into and out of the on-premises system through a load balancer. Consider, for example, an F5 BIG-IP LTM, which creates dynamic connections between the compute nodes and the external network interfaces. The current best practice for PowerFlex rack is to create three special-purpose networks for management, internal, and external traffic.
In the production workload vCenter, create a new compute cluster with at least one compute server to host the workloads. This cluster requires VMware Distributed Resource Scheduler (DRS) and one resource pool.
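As an illustration, the cluster and resource pool could be created with the `govc` CLI (part of the govmomi project); this is a sketch, and the datacenter, cluster, and pool names are placeholders rather than values from this architecture:

```shell
# Create the compute cluster in the production workload vCenter.
# "Datacenter", "anthos-compute", and "anthos-pool" are hypothetical names.
govc cluster.create -dc Datacenter anthos-compute

# Enable DRS on the new cluster, as required by this design.
govc cluster.change -dc Datacenter -drs-enabled /Datacenter/host/anthos-compute

# Create the resource pool that will hold the Anthos on-prem cluster VMs.
govc pool.create /Datacenter/host/anthos-compute/Resources/anthos-pool
```

The same steps can be performed in the vSphere Client; the CLI form is shown only to make the required objects (cluster, DRS setting, resource pool) explicit.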
The following figure shows the logical configuration between Anthos clusters running on PowerFlex rack in on-premises data centers, Anthos clusters on VMware, and Anthos clusters on GCP:
Figure 4. Logical design of Anthos with Anthos cluster deployed on PowerFlex rack
In this architecture, applications running on the Anthos on-prem cluster can be exposed internally, or externally to the web, without traffic passing through GCP.
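For example, an application can be exposed through the load balancer with a standard Kubernetes Service of type LoadBalancer; the Service name, selector, and VIP below are hypothetical, and the VIP must be an address reserved on the external network:

```yaml
# Hypothetical Service exposing a workload through the external
# load balancer (e.g., F5 BIG-IP LTM) without traffic passing through GCP.
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # placeholder VIP on the external network
  selector:
    app: web-frontend
  ports:
  - port: 443
    targetPort: 8443
```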
Note: The Anthos on-prem Admin Network connection from the on-premises data center to Anthos is outbound only.
The production vCenter server hosts multiple virtual machines in the new resource pool; together, these virtual machines form the Anthos on-prem compute cluster. Application workloads run as processes inside the Anthos on-prem compute cluster virtual machines. When an application is deployed into the Anthos on-prem compute cluster, no additional vSphere virtual machines are created; the application runs inside the existing cluster virtual machines. If additional workload capacity is required, the Anthos on-prem compute cluster is expanded using the gkectl command-line utility or the Kubernetes Cluster API.
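As a sketch of the gkectl path, expanding the cluster amounts to raising the replica count in the user cluster configuration's node pool and re-applying it; the file name, pool name, and sizing values here are assumptions, not values from this architecture:

```yaml
# Excerpt from a hypothetical user cluster configuration (user-cluster.yaml).
# Increasing replicas adds Anthos on-prem compute cluster node VMs.
nodePools:
- name: pool-1
  cpus: 4
  memoryMB: 8192
  replicas: 5   # raised from 3; gkectl provisions two additional node VMs
```

The change would then be applied with `gkectl update cluster --kubeconfig ADMIN_KUBECONFIG --config user-cluster.yaml`, where `ADMIN_KUBECONFIG` points at the admin cluster's kubeconfig file.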
The correlation between vSphere VMs and Anthos on-prem cluster servers is summarized as follows: