The Dell Technologies HPC & AI Innovation Lab engineering team deployed the following network configurations for this validated design:
This design option uses an S5248-ON switch as the top-of-rack (ToR) switch for management, vSphere vMotion, and VM traffic. You can also use your existing 10 Gb or 25 Gb Ethernet network infrastructure instead of the S5248-ON switch.
The following figure shows the network topology for a 25 GbE design with PowerSwitch networking:
Figure 8. Network topology for 25 GbE-based design with PowerSwitch network switches
The preceding figure shows network connectivity for one PowerEdge server only. The other PowerEdge servers in the vSphere cluster have similar connectivity. Two redundant S5248-ON switches are used as the ToR switches providing 25 Gb Ethernet connectivity. A ConnectX 25 GbE dual-port network adapter in the PowerEdge R750xa server provides connectivity to the ToR switches.
An S3048-ON switch provides 1 Gb Ethernet for out-of-band (OOB) management connectivity. Each PowerEdge server's iDRAC is connected to this switch.
vCenter Server and PowerScale storage also have connectivity to the ToR switches.
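The per-host networking for this design can also be scripted. The following Python sketch uses the open-source pyVmomi library, rather than any tooling from the validated design, to show one way of creating a standard vSwitch backed by the two 25 GbE uplinks and adding port groups for the management, vMotion, and vSAN networks. The vCenter address, credentials, vmnic names, VLAN IDs, and MTU are placeholder assumptions, not values from the validated design.

```python
# Illustrative sketch only: host names, credentials, vmnic names, VLAN IDs,
# and MTU are assumptions, not values from the validated design.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.example.local"              # assumed vCenter FQDN
USER, PASSWORD = "administrator@vsphere.local", "changeme"
UPLINKS = ["vmnic2", "vmnic3"]                 # assumed 25 GbE ConnectX ports
PORT_GROUPS = {                                # assumed VLAN IDs
    "vSphere Management": 100,
    "vMotion": 101,
    "vSAN": 102,
}

ctx = ssl._create_unverified_context()         # lab use only; validate certificates in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    net_sys = host.configManager.networkSystem

    # Standard vSwitch bonded to both 25 GbE uplinks for redundancy
    vss_spec = vim.host.VirtualSwitch.Specification(
        numPorts=128,
        mtu=9000,                              # jumbo frames: an assumption, adjust as needed
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=UPLINKS))
    net_sys.AddVirtualSwitch(vswitchName="vSwitch1", spec=vss_spec)

    # One port group per traffic type, tagged with its VLAN
    for name, vlan in PORT_GROUPS.items():
        pg_spec = vim.host.PortGroup.Specification(
            name=name, vlanId=vlan, vswitchName="vSwitch1",
            policy=vim.host.NetworkPolicy())
        net_sys.AddPortGroup(portgrp=pg_spec)

Disconnect(si)
```

In a complete environment, VMkernel adapters for vMotion and vSAN would still need to be created on these port groups; the sketch covers only the switch and port group layout.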
The following figure shows the network topology for a 100 GbE design with the Dell PowerSwitch S5232F-ON network switch:
Figure 9. Network topology for 100 GbE-based design with Dell PowerSwitch network switches
The preceding figure shows network connectivity for one PowerEdge server only. The other PowerEdge servers in the cluster have similar connectivity. An S5232F-ON switch is used as the ToR switch providing 100 Gb Ethernet connectivity for VMs with an AI workload. A ConnectX 100 GbE dual-port network adapter in the PowerEdge R750xa server provides connectivity to the 100 Gb Ethernet ToR switch.
Two redundant S5248-ON switches are used as the ToR switches providing 10 Gb or 25 Gb Ethernet connectivity for management, vSphere vMotion, and other VM traffic. An Intel or Broadcom 10 Gb Ethernet adapter (network daughter card) in the PowerEdge R750xa server provides connectivity to the S5248 switches.
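Because the 100 GbE ConnectX ports and the 10 Gb or 25 Gb management ports on each host carry different traffic classes in this design, it can help to confirm which vmnics are which before assigning uplinks. The following Python sketch, again using the open-source pyVmomi library with placeholder vCenter details, groups each host's physical NICs by reported link speed; none of the names or values come from the validated design.

```python
# Sketch: group each host's physical NICs by reported link speed so the
# 100 GbE ConnectX ports can be reserved as uplinks for the AI workload
# network and the 10/25 GbE ports for management and vMotion traffic.
# The vCenter address and credentials are placeholders.
import ssl
from collections import defaultdict
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                     # lab use only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view

for host in hosts:
    by_speed = defaultdict(list)
    for pnic in host.config.network.pnic:
        speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else 0   # 0 = link down
        by_speed[speed].append(pnic.device)
    ai_uplinks = by_speed.get(100000, [])                  # 100 GbE ports
    mgmt_uplinks = by_speed.get(25000, []) + by_speed.get(10000, [])
    print(f"{host.name}: AI uplinks {ai_uplinks}, mgmt/vMotion uplinks {mgmt_uplinks}")

Disconnect(si)
```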
The following table describes the networks that are configured as part of the validated design:
Table 6. Configured networks
| Network | Description |
|---------|-------------|
| vSphere Management | Used by ESXi for host management traffic. |
| vMotion | Used by ESXi for vMotion traffic. |
| vSAN | Used by ESXi for vSAN traffic. |
| Supervisor Cluster Management | Management network for the Supervisor Cluster control plane VMs. |
| NSX Advanced Load Balancer (Avi) Management | Network where the Avi Controller (also called the Controller) resides. It provides the Controller with connectivity to vCenter Server, the ESXi hosts, and the Supervisor Cluster control plane nodes. |
| NSX Advanced Load Balancer (Avi) Data Network | Network that the data interfaces of the Avi Service Engines connect to. The load balancer virtual IPs (VIPs) are assigned from this network. |
| Primary Workload Management | Additional management network for the Supervisor Cluster control plane VMs. |
| Workload Domain Network | Carries traffic for the Tanzu Kubernetes cluster control plane VMs and workloads. |
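Each of these networks is typically mapped to its own VLAN and subnet. The short Python sketch below shows one way such a plan could be recorded and checked for duplicate VLAN IDs and overlapping subnets using only the standard library; the VLAN IDs and CIDR ranges shown are placeholders, not values from the validated design.

```python
# Illustrative VLAN/subnet plan for the networks in Table 6.
# All VLAN IDs and subnets are placeholders, not part of the validated design.
from ipaddress import ip_network
from itertools import combinations

NETWORK_PLAN = {
    "vSphere Management":            {"vlan": 100, "subnet": "172.16.10.0/24"},
    "vMotion":                       {"vlan": 101, "subnet": "172.16.11.0/24"},
    "vSAN":                          {"vlan": 102, "subnet": "172.16.12.0/24"},
    "Supervisor Cluster Management": {"vlan": 110, "subnet": "172.16.20.0/24"},
    "Avi Management":                {"vlan": 111, "subnet": "172.16.21.0/24"},
    "Avi Data Network":              {"vlan": 112, "subnet": "172.16.22.0/24"},
    "Primary Workload Management":   {"vlan": 120, "subnet": "172.16.30.0/24"},
    "Workload Domain Network":       {"vlan": 121, "subnet": "172.16.31.0/24"},
}

def check_plan(plan):
    """Flag duplicate VLAN IDs and overlapping subnets in the plan."""
    problems = []
    vlans = [cfg["vlan"] for cfg in plan.values()]
    if len(vlans) != len(set(vlans)):
        problems.append("Duplicate VLAN IDs in plan")
    for (name_a, cfg_a), (name_b, cfg_b) in combinations(plan.items(), 2):
        if ip_network(cfg_a["subnet"]).overlaps(ip_network(cfg_b["subnet"])):
            problems.append(f"Subnet overlap: {name_a} and {name_b}")
    return problems

if __name__ == "__main__":
    issues = check_plan(NETWORK_PLAN)
    print("Plan OK" if not issues else "\n".join(issues))
```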