For the Dell EMC reference architecture, we provisioned each server with 4 x 25 GbE NIC ports that are cross-wired to the network switches. A 25 GbE network is the preferred primary fabric for internode communication. Two S5248F-ON or S5232F-ON switches provide data-layer communication, while one S3048-ON switch is used for out-of-band (OOB) management.
The following figure shows the network architecture.
Figure 7. Virtual Link Trunking (VLT)
This reference architecture is designed to deliver maximum availability and enough network bandwidth that neither storage performance nor compute performance is limited by the available network bandwidth. Each server has 4 x 25 GbE ports: two ports on the Network Daughter Card (NDC) and two more on a dual-port NIC. One port from each adapter goes to ToR Switch A, and the other port from each adapter is wired to ToR Switch B.
Virtual Link Trunking (VLT) is a layer-2 link aggregation technology that allows an end device to terminate an aggregated link on two different switches. VLT offers a redundant, load-balancing connection to the core network in a loop-free environment and eliminates the need for a spanning tree protocol. VLT provides link connectivity between a server and the network through two different switches, and it can also be used for uplinks between access or distribution switches and core switches.
This reference architecture has three logical networks.
The following figure shows the network components of the Red Hat OpenShift Container Platform and their logical architecture.
Figure 8. Red Hat OpenShift Container Platform logical/physical network connectivity
The four physical 25 GbE interfaces are bundled together in an 802.3ad link aggregation. The external network is presented as an 802.1Q VLAN-tagged network from the upstream Dell EMC S5248F-ON switch pair, and the internal network is presented as the untagged VLAN on the same bond to facilitate PXE booting. LACP fallback is enabled so that the nodes can communicate with the provisioning system while they are PXE booting with the UEFI PXE module. This bond is labeled "Bond 0" in the diagram.
The recommended bonding options for the Linux bond interface are:
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4 lacp_rate=1"
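For reference, the following is a minimal sketch of RHEL 7 ifcfg files that implement this bonding and VLAN layout. The interface names beyond em1 and em2, the VLAN ID (20), and the IP addresses are placeholders, not values prescribed by this architecture; only the bonding options come from the line above.

# /etc/sysconfig/network-scripts/ifcfg-bond0 (internal network, untagged)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4 lacp_rate=1"
BOOTPROTO=none
IPADDR=172.16.0.11
PREFIX=24
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-em1 (repeat for em2 and the two NIC ports)
DEVICE=em1
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-bond0.20 (external network, 802.1Q tagged)
DEVICE=bond0.20
VLAN=yes
BOOTPROTO=none
IPADDR=192.168.20.11
PREFIX=24
ONBOOT=yes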
All Red Hat OpenShift nodes are logically connected through the internal network, which means that they are all on the same layer-2 broadcast domain. In addition, Open vSwitch creates its own network for Red Hat OpenShift pod-to-pod communication. The OpenShift ovs-multitenant plugin allows only pods within the same project namespace to communicate with one another. Keepalived manages a virtual IP address on three infrastructure hosts for external access to the Red Hat OpenShift Container Platform web console and applications.
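As an illustration only, a keepalived VRRP instance for this virtual IP address might look like the following sketch. The interface name, virtual router ID, priority, and VIP address are assumptions and must be adapted to the environment; in an HA deployment each of the three infrastructure hosts runs a similar instance with a different priority.

# /etc/keepalived/keepalived.conf (sketch; values are placeholders)
vrrp_instance openshift_vip {
    state BACKUP
    interface bond0.20
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.168.20.100/24
    }
}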
If necessary, you can use an enterprise external load balancer (F5, NGINX, or other) as the ingress point for both the web console and the OpenShift Router. An external load balancer may be needed for edge request routing and load balancing for applications deployed on the OpenShift cluster, and it might already be available in the target deployment location. For more information, see the Multiple Masters example in OpenShift Example Inventory Files.
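If HAProxy is chosen as that external load balancer, the master API and web console endpoint could be fronted with an excerpt like the one below. HAProxy itself, the server addresses, and the backend names are assumptions for illustration; port 8443 is the default OpenShift Container Platform 3.x master API and web console port.

# /etc/haproxy/haproxy.cfg (excerpt; hypothetical addresses)
frontend openshift-api
    bind *:8443
    mode tcp
    option tcplog
    default_backend openshift-masters

backend openshift-masters
    mode tcp
    balance source
    server master1 192.168.20.21:8443 check
    server master2 192.168.20.22:8443 check
    server master3 192.168.20.23:8443 check

A similar TCP pass-through frontend and backend pair on ports 80 and 443 can front the OpenShift Router on the infrastructure nodes.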
To achieve a maximum fail-safe high availability (HA) configuration, cable the NIC ports across the HA ToR switches as shown in the following table:
Table 9. Recommended cabling
| Server  | ToR Switch-1 | ToR Switch-2 | Port channel |
|---------|--------------|--------------|--------------|
| bastion | em1          | em2          | 1            |
| master1 | em1          | em2          | 2            |
| master2 | em1          | em2          | 3            |
| master3 | em1          | em2          | 4            |
| infra1  | em1          | em2          | 5            |
| infra2  | em1          | em2          | 6            |
| infra3  | em1          | em2          | 7            |
| app1    | em1          | em2          | 8            |
| app2    | em1          | em2          | 9            |
| app3    | em1          | em2          | 10           |
| app4    | em1          | em2          | 11           |
| stor1   | em1          | em2          | 12           |
| stor2   | em1          | em2          | 13           |
| stor3   | em1          | em2          | 14           |
| stor4   | em1          | em2          | 15           |