OpenShift Container Platform 4.10 introduces advanced networking features that enable high-performance container networking and traffic monitoring. Dell networking products are designed for ease of use and for building resilient networks. Dell Technologies recommends designs that apply the following principles:
Container networking takes advantage of the high-speed (25/100 GbE) network interfaces of the Dell server portfolio. To meet network capacity requirements, pods can also attach to additional networks by using the available CNI plug-ins.
Additional networks are useful when network traffic isolation is required. Networking applications such as Container Network Functions (CNFs) have control traffic and data traffic. These different types of traffic have different processing, security, and performance requirements.
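For example, an isolated secondary network can be declared as a NetworkAttachmentDefinition that the Multus CNI exposes to pods. The following minimal sketch uses the macvlan plug-in; the attachment name, namespace, and host interface (ens1f1) are hypothetical values:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-data            # hypothetical attachment name
  namespace: cnf-workloads      # hypothetical application namespace
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens1f1",
      "ipam": { "type": "dhcp" }
    }'

A pod requests the secondary interface by adding the annotation k8s.v1.cni.cncf.io/networks: macvlan-data to its metadata.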
Pods can also be attached directly to an SR-IOV virtual function (VF) interface on the host system, both for traffic isolation and for increased I/O performance.
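On clusters that run the SR-IOV Network Operator, the VF attachment can be expressed as an SriovNetwork resource. This sketch assumes a hypothetical VF resource pool (dataNicVfs) that is already defined by an SriovNetworkNodePolicy, plus hypothetical VLAN and subnet values:

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-data
  namespace: openshift-sriov-network-operator
spec:
  resourceName: dataNicVfs        # must match the resourceName of an SriovNetworkNodePolicy
  networkNamespace: cnf-workloads # namespace in which the attachment is created
  vlan: 100                       # VLAN tag applied to VF traffic
  ipam: |
    { "type": "host-local", "subnet": "192.168.100.0/24" }

Pods then request the VF through the same k8s.v1.cni.cncf.io/networks annotation; the traffic bypasses the host's software switching layer entirely.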
Dual homing means that each node in the OpenShift cluster has at least two NICs, each connected to a different switch. The switches require a VLT connection so that they operate together as a single unit of connectivity and provide redundant data paths for network traffic. The NICs at each node, and the switch ports that they connect to, can be aggregated using bonding to ensure HA operation.
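If the Kubernetes NMState Operator is installed, the node-side bond can be declared through a NodeNetworkConfigurationPolicy. This minimal sketch assumes hypothetical NIC names (ens1f0 and ens1f1) and uses LACP (802.3ad) to match a VLT port channel on the switch pair:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: worker-bond0
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: bond0
        type: bond
        state: up
        ipv4:
          enabled: true
          dhcp: true
        link-aggregation:
          mode: 802.3ad        # LACP, matching the VLT port channel on the switches
          port:
            - ens1f0           # cabled to the first leaf switch
            - ens1f1           # cabled to the second leaf switch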
A nonblocking fabric is required to meet the needs of the microservices data traffic. Dell Technologies recommends deploying a leaf-spine network.
OpenShift Container Platform 4.10 supports Service Mesh. Users can monitor container traffic by using Kiali and perform end-to-end tracing of applications by using Jaeger.
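For illustration, both tools are enabled through the ServiceMeshControlPlane resource of the Service Mesh Operator. This minimal sketch uses the conventional name and namespace from the Service Mesh documentation; the in-memory Jaeger storage shown is suitable only for evaluation:

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  tracing:
    type: Jaeger               # enable distributed tracing with Jaeger
  addons:
    kiali:
      enabled: true            # traffic-monitoring console
    jaeger:
      install:
        storage:
          type: Memory         # in-memory trace storage; evaluation only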
Each server in the rack, whichever of the many available NIC options it uses, is connected to both leaf switches.
Our network design employs a VLT connection between the two leaf switches. In a VLT environment, all paths are active; therefore, it is possible to achieve high throughput while still protecting against hardware failures.
VLT technology allows a server to uplink multiple physical trunks into more than one PowerSwitch by treating the uplinks as one logical trunk. A VLT-connected pair of switches acts as a single switch to a connecting server, and both links can actively forward and receive traffic. VLT replaces Spanning Tree Protocol (STP) designs by providing both redundancy and full bandwidth utilization over multiple active paths.
The major benefits of VLT technology are active-active redundancy, protection against switch or link failure, and full bandwidth utilization, because no links are blocked as they would be under STP.
The VLTi configuration in this design uses two 100 GbE ports between the two ToR switches. The remaining 100 GbE ports can be used for high-speed connectivity to spine switches, or directly to the data center core network infrastructure.
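A representative OS10 sketch of this VLT configuration on one ToR switch follows; the port numbers and backup destination address are hypothetical and depend on the switch model, and the peer switch carries the mirrored configuration:

! VLTi member ports (two 100 GbE links to the peer ToR switch)
interface ethernet1/1/31
 description VLTi-1
 no switchport
 no shutdown
interface ethernet1/1/32
 description VLTi-2
 no switchport
 no shutdown
! VLT domain definition
vlt-domain 1
 discovery-interface ethernet1/1/31
 discovery-interface ethernet1/1/32
 backup destination 192.168.1.2
 peer-routing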
You can scale container solutions by adding multiple compute nodes and storage nodes. A container cluster can have multiple racks of servers. To create a nonblocking fabric that meets the needs of the microservices data traffic, we used a leaf-spine network.
Layer 2 and Layer 3 leaf-spine topologies employ the same physical design: every leaf switch connects to every spine switch, while hosts connect only to the leaf switches. The two topologies differ in where the boundary between Layer 2 and Layer 3 is placed.
Our design used dual leaf switches at the top of each rack. We employed VLT between the leaf switches in each rack, which allows all connections to be active while also providing fault tolerance. As administrators add racks to the data center, leaf switches are added to each new rack.
The total number of leaf-spine connections is the number of leaf switches multiplied by the number of spine switches; for example, a fabric with six leaf switches and two spine switches has 12 leaf-spine links. Administrators can increase the bandwidth of the fabric by adding connections between leaf and spine switches, provided that the spine layer has capacity for the additional connections.
Layer 3 leaf-spine network
In a Layer 3 leaf-spine network, traffic is routed between leaf switches and spine switches. The boundary between Layer 3 and Layer 2 is at the leaf switches. Spine switches are never connected to each other in a Layer 3 topology. Equal cost multipath routing (ECMP) is used to load-balance traffic across the Layer 3 network. Connections within racks from hosts to leaf switches are Layer 2. Connections to external networks are made from a pair of edge or border leaf switches, as shown in the following figure:
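For illustration, a leaf switch in this topology typically runs eBGP toward each spine. The following OS10 sketch uses hypothetical ASNs and point-to-point addresses, and exact command syntax can vary by OS10 release:

router bgp 65101
 router-id 10.0.2.1
 address-family ipv4 unicast
  redistribute connected
 neighbor 10.0.1.0
  remote-as 65100
  no shutdown
 neighbor 10.0.1.2
  remote-as 65100
  no shutdown

With two spine switches, each leaf learns two equal-cost BGP paths to every remote rack, and ECMP hashes traffic flows across both uplinks. Depending on the OS10 release, BGP multipath may need to be enabled explicitly.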
Dell’s high-capacity network switches are cost-effective and easy to deploy, and they provide a clear path to a software-defined data center.
For our solution design, we used Dell Networking Operating System 10 (OS10). OS10 enables disaggregation of network functions on top of an open-source, Linux-based operating system.
This solution uses the following HA features, described below: keepalived with VRRP for IP failover, and a resilient HAProxy deployment for external load balancing.
Keepalived is an open-source project that implements routing software using the Virtual Router Redundancy Protocol (VRRP). VRRP enables switchover to a backup server if the primary server fails. The switchover is achieved by moving a virtual IP address (VIP) between the servers.
Configure keepalived on both servers, with one server acting as the primary and the other as the backup.
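The exact configuration is environment-specific; the following is a minimal keepalived.conf sketch for the primary server, in which the interface name, router ID, and VIP are hypothetical values:

vrrp_instance VI_1 {
    state MASTER              # BACKUP on the secondary server
    interface ens192          # hypothetical interface carrying the VIP
    virtual_router_id 51
    priority 100              # use a lower value (for example, 90) on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme    # shared secret, up to eight characters
    }
    virtual_ipaddress {
        192.168.10.5/24       # hypothetical VIP shared by both servers
    }
}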
To create a complete solution, always make external traffic paths highly available. The cluster administrator can either use an external Layer 4 load-balancer in a highly available manner or deploy HAProxy in a resilient mode; deploying resilient HAProxy requires one additional server. As shown in the following figure, the highly available load-balancer design consists of two HAProxy servers that share a keepalived-managed VIP.
For both servers, configure the VIP on a suitable network interface.
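For illustration, a minimal haproxy.cfg fragment for the OpenShift API endpoint might look like the following, where the VIP and the control-plane addresses are hypothetical; equivalent frontend and backend pairs are also needed for ports 80, 443, and 22623:

frontend openshift-api
    bind 192.168.10.5:6443          # VIP managed by keepalived
    mode tcp
    default_backend openshift-api-be

backend openshift-api-be
    mode tcp
    balance source
    server master0 192.168.10.11:6443 check
    server master1 192.168.10.12:6443 check
    server master2 192.168.10.13:6443 check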