From a logical perspective, traffic flows into and out of the on-premises system through a load balancer, which creates dynamic connections between the compute nodes and the external network interfaces. The best practice for a PowerFlex system is to create three special-purpose networks for management, internal, and external traffic.
The following figure shows the logical configuration between Anthos clusters running in on-premises data centers on PowerFlex, Anthos clusters on VMware, and Anthos clusters on GCP:
Figure 8: Logical design of an Anthos cluster deployed on the PowerFlex system
In this architecture, applications running on the Anthos on-premises cluster can be exposed internally or externally to the internet without traffic passing through GCP.
Note: The Anthos on-premises Admin Network connection from the on-premises data center to Anthos is outbound only.
The production vCenter contains multiple VMs in a new resource pool that make up the virtual Anthos on-premises compute cluster. The compute cluster should have at least one compute server and one resource pool in the production workload vCenter to host the workloads. This cluster requires VMware Distributed Resource Scheduler (DRS). The application workloads are pods that run inside the Anthos on-premises compute GKE cluster virtual machines. When an application is deployed into the Anthos on-premises compute GKE cluster, no additional vSphere virtual machines are created. If additional workload capacity is required, the Anthos on-premises compute cluster is expanded using the gkectl command-line utility or the Kubernetes Cluster API.
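As an illustration of the expansion step, scaling an Anthos on-premises compute cluster with gkectl typically means raising the replica count of a node pool in the user cluster configuration file and applying the change. The file name, node-pool name, and sizing values below are assumptions for the sketch, not values from this architecture:

```
# user-cluster.yaml (hypothetical excerpt) -- add worker capacity
nodePools:
- name: compute-pool      # assumed node-pool name
  cpus: 8                 # assumed per-node sizing
  memoryMB: 32768
  replicas: 5             # raised from 3; gkectl creates two new vSphere VMs
```

After editing the file, the change is applied with `gkectl update cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG --config user-cluster.yaml`, which provisions the additional node VMs in the compute cluster's resource pool.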
To summarize, the relationship between the vSphere VMs and Anthos on-premises cluster servers is one-to-one: each Anthos on-premises cluster node runs as a vSphere virtual machine, and application workloads run as pods inside those VMs rather than as additional VMs.
For more information about PowerFlex networking, see PowerFlex Networking Best Practices and Design Considerations.