Design Guide—Red Hat OpenShift Container Platform 4.14 on AMD-powered Dell Infrastructure: Physical network design
OpenShift Container Platform 4.14 includes advanced networking features that enable high-performance container networking and monitoring. Dell Technologies recommends designs that incorporate the following principles:
Container networking takes advantage of the high-speed (25/100 GbE) network interfaces in the Dell server portfolio. To meet network capacity requirements, pods can attach to additional networks by using the available CNI plug-ins.
Additional networks are useful when network traffic isolation is required. Networking applications such as Container Network Functions (CNFs) carry both control traffic and data traffic, and these traffic types have different processing, security, and performance requirements.
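As an illustration of attaching a pod to an additional network, a secondary network can be declared with a NetworkAttachmentDefinition served by a CNI plug-in such as macvlan. This is a minimal sketch; the network name, namespace, parent interface, and IPAM range are placeholder assumptions, not values from this design:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: cnf-data-net          # illustrative name
  namespace: cnf-apps         # illustrative namespace
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "ens3f0",
    "mode": "bridge",
    "ipam": { "type": "static", "addresses": [{ "address": "192.168.10.10/24" }] }
  }'
```

A pod then requests the additional interface through the `k8s.v1.cni.cncf.io/networks: cnf-data-net` annotation, keeping its data traffic separate from the cluster's default pod network.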
Pods can be attached to the SR-IOV virtual function (VF) interface on the host system for traffic isolation and to increase I/O performance.
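As a sketch of how VFs are made available to pods, the OpenShift SR-IOV Network Operator can partition a physical NIC into VFs with an SriovNetworkNodePolicy. The policy name, resource name, NIC name, and VF count below are illustrative assumptions:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-cnf-data        # illustrative name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: cnfDataVfs     # exposed to pods as openshift.io/cnfDataVfs
  numVfs: 8                    # assumed VF count per physical function
  nicSelector:
    pfNames: ["ens3f0"]        # assumed physical NIC
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  deviceType: netdevice
```

A companion SriovNetwork object then exposes these VFs as an attachable network, and pods attach to it through the `k8s.v1.cni.cncf.io/networks` annotation, bypassing the software bridge for higher I/O performance.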
Dual homing means that each node in the OpenShift cluster has at least two NICs, each connected to a different switch. The switches require a VLT connection so that they operate together as a single unit of connectivity, providing redundant data paths for network traffic. The NICs on each node, and the switch ports that they connect to, can be aggregated using bonding to ensure HA operation.
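One way to express this bond on an OpenShift node is through the Kubernetes NMState Operator. The following is a minimal sketch, assuming LACP (802.3ad) toward the VLT switch pair; the policy and interface names are placeholders:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond0-dual-homed        # illustrative name
spec:
  desiredState:
    interfaces:
      - name: bond0
        type: bond
        state: up
        link-aggregation:
          mode: 802.3ad         # LACP; the VLT pair presents one logical LACP partner
          port:
            - ens3f0            # assumed NIC cabled to leaf switch 1
            - ens3f1            # assumed NIC cabled to leaf switch 2
```

Because the VLT pair behaves as a single LACP partner, the bond keeps both links active and continues forwarding if either switch or link fails.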
A nonblocking fabric is required to meet the needs of the microservices data traffic. Dell Technologies recommends deploying a leaf-spine network.
Dell networking products are designed to enable the creation of resilient networks. Each server in the rack, using any of the available NIC options, is connected to both leaf switches.
This DVD employs a VLT connection between the two leaf switches. In a VLT environment, all paths are active and therefore it is possible to achieve high throughput while still protecting against hardware failures.
VLT technology allows a server to uplink multiple physical links into more than one PowerSwitch switch by treating the uplinks as one logical trunk. A VLT-connected pair of switches acts as a single switch to a connecting server, and both links can actively forward and receive traffic. VLT removes the need for Spanning Tree Protocol (STP) to block redundant paths, providing both redundancy and full bandwidth utilization across multiple active paths.
The major benefits of VLT technology are:
- Dual-homed servers see the switch pair as a single logical switch, which simplifies connectivity.
- All paths are active, so the full bandwidth of every link is available.
- If one switch or link fails, traffic continues to flow over the remaining path.
The VLTi configuration in this DVD uses two 100 GbE ports between each ToR switch. The remaining 100 GbE ports can be used for high-speed connectivity to spine switches, or directly to the data center core network infrastructure.
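For orientation, a VLT domain on a SmartFabric OS10 ToR switch is built from a domain ID, the VLTi discovery interfaces, and a backup destination. The following is an illustrative sketch only; port numbers and the backup address are placeholders, not values from this design:

```
! On each ToR switch (illustrative values)
OS10(config)# vlt-domain 1
OS10(conf-vlt-1)# discovery-interface ethernet1/1/31-1/1/32
OS10(conf-vlt-1)# backup destination 192.168.1.2

! Server-facing port-channel made VLT-aware
OS10(config)# interface port-channel 10
OS10(conf-if-po-10)# vlt-port-channel 10
```

The `vlt-port-channel` ID must match on both switches so that the pair presents the server's bond with a single logical LACP partner.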
You can scale container solutions by adding multiple compute nodes and storage nodes. A container cluster can have multiple racks of servers. To create a nonblocking fabric that meets the needs of the microservices data traffic, the Dell OpenShift team used a leaf-spine network.
Layer 2 and Layer 3 leaf-spine topologies employ the following concepts:
- Each rack contains leaf switches at the top of the rack, and servers connect only to the leaf switches in their rack.
- Every leaf switch connects to every spine switch, so traffic between racks crosses at most one spine switch.
This DVD uses dual leaf switches at the top of each rack. The Dell OpenShift validation team employed VLT in the spine layer, allowing all connections to be active while also providing fault tolerance. As administrators add racks to the data center, leaf switches are added to each new rack.
The total number of leaf-spine connections is equal to the number of leaf switches multiplied by the number of spine switches. Administrators can increase the bandwidth of the fabric by adding connections between leaf switches and spine switches if the spine layer has capacity for additional connections.
Layer 3 leaf-spine network
In a Layer 3 leaf-spine network, traffic is routed between leaf switches and spine switches. Spine switches are never connected to each other in a Layer 3 topology. The boundary between Layer 3 and Layer 2 is at the leaf switches. Equal cost multipath routing (ECMP) is used to load-balance traffic across the Layer 3 network. Connections within racks from hosts to leaf switches are Layer 2. Connections to external networks are made from a pair of edge or border leaf switches, as shown in the following figure:
The high-capacity network switches from Dell Technologies are cost-effective and easy to deploy. The switches provide a clear path to a software-defined data center, offering:
This solution uses the following HA features: