Physical network design
Dell EMC networking products are designed for ease of use and for building resilient networks. OpenShift Container Platform 4.3 introduces advanced networking features that enable high-performance container networking and traffic monitoring. Our recommended design applies the following principles:
Container networking takes advantage of the high-speed (25/100 GbE) network interfaces of the Dell Technologies server portfolio. In addition, to meet network capacity requirements, pods can attach to additional networks by using the available CNI plug-ins (a sample additional-network definition follows these principles).
Additional networks are useful when network traffic isolation is required. Networking applications such as Container Network Functions (CNFs) carry both control traffic and data traffic, and these traffic types have different processing, security, and performance requirements.
Pods can be attached to the SR-IOV virtual function (VF) interface on the host system for traffic isolation and to increase I/O performance.
Dual-homing means that each node in the OpenShift cluster has at least two NICs, each connected to a different switch. The switches require VLT connections so that they operate together as a single unit of connectivity and provide redundant data paths for all network traffic. The NICs on each node and the switch ports they connect to can be aggregated by using bonding to ensure HA operation.
A nonblocking fabric is required to meet the needs of the microservices data traffic. Dell Technologies recommends deploying a leaf-spine network.
OpenShift Container Platform 4.3 supports Service Mesh. Users can monitor container traffic by using Kiali and perform end-to-end tracing of applications by using Jaeger.
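As a minimal illustration of the additional-network principle above, the following sketch defines a macvlan-based NetworkAttachmentDefinition and references it from a pod annotation. The network name, namespace, host interface (ens2f1), IPAM settings, and image are placeholders assumed for this example; in OpenShift, additional networks are usually managed through the Cluster Network Operator, and the SR-IOV Network Operator is used when attaching pods to SR-IOV VFs.

```yaml
# Example additional network (Multus NetworkAttachmentDefinition) using macvlan.
# Name, namespace, parent interface, and IPAM settings are placeholders.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: storage-net
  namespace: demo
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "ens2f1",
    "mode": "bridge",
    "ipam": { "type": "host-local", "subnet": "192.168.50.0/24" }
  }'
---
# A pod requests the additional interface through an annotation.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: demo
  annotations:
    k8s.v1.cni.cncf.io/networks: storage-net
spec:
  containers:
    - name: app
      image: registry.example.com/demo/app:latest
```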
The servers offer many network adapter options; in this design, each server is dual-homed to the two top-of-rack leaf switches in its rack.
Our network design employs a VLT connection between the two leaf switches. In a VLT environment, all paths are active; therefore, it is possible to achieve high throughput while still protecting against hardware failures.
VLT technology allows a server to uplink multiple physical trunks into more than one Dell EMC PowerSwitch switch by treating the uplinks as one logical trunk. A VLT-connected pair of switches acts as a single switch to a connecting server, and both links from the connected server can actively forward and receive traffic. VLT replaces Spanning Tree Protocol (STP)-based designs by providing both redundancy and full bandwidth utilization across multiple active paths.
The major benefits of VLT technology are active-active redundancy, full utilization of all available links, and the elimination of STP-blocked ports.
The VLTi configuration in this design uses two 100 GbE ports between the two ToR switches. The remaining 100 GbE ports can be used for high-speed connectivity to spine switches or directly to the data center core network infrastructure.
You can scale container solutions by adding multiple worker nodes and storage nodes. A container cluster can have multiple racks of servers. To create a nonblocking fabric that meets the needs of the microservices data traffic, we used a leaf-spine network.
Layer 2 and Layer 3 leaf-spine topologies employ the same basic concepts: every leaf switch connects to every spine switch, and hosts connect only to the leaf switches in their rack.
Our design used dual leaf switches at the top of each rack. We employed VLT in the spine layer, which allows all connections to be active while also providing fault tolerance. As administrators add racks to the data center, leaf switches are added to each new rack.
The total number of leaf-spine connections is equal to the number of leaf switches multiplied by the number of spine switches. Administrators can increase the bandwidth of the fabric by adding connections between leaves and spines if the spine layer has capacity for the additional connections.
Layer 3 leaf-spine network
In a Layer 3 leaf-spine network, traffic is routed between leaves and spines. The Layer 3-Layer 2 boundary is at the leaf switches. Spine switches are never connected to each other in a Layer 3 topology. Equal cost multipath routing (ECMP) is used to load-balance traffic across the Layer 3 network. Connections within racks from hosts to leaf switches are Layer 2. Connections to external networks are made from a pair of edge or border leaves, as shown in the following figure:
Figure 3. Leaf-spine network configuration
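As an illustrative sketch only, the following OS10 CLI fragment shows eBGP peering from one leaf switch to two spine switches. The ASNs, interface numbers, and point-to-point addresses are placeholder assumptions, and the exact syntax should be verified against the OS10 release in use. When both spine neighbors advertise the same prefixes, ECMP load-balances traffic across the resulting equal-cost uplinks.

```
! Hypothetical leaf configuration: routed point-to-point uplinks to two spines plus eBGP peering.
interface ethernet1/1/31
 no switchport
 ip address 192.168.1.1/31
 no shutdown
interface ethernet1/1/32
 no switchport
 ip address 192.168.2.1/31
 no shutdown
!
router bgp 65101
 router-id 10.0.0.11
 neighbor 192.168.1.0
  remote-as 65100
  no shutdown
 neighbor 192.168.2.0
  remote-as 65100
  no shutdown
```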
Dell’s high-capacity network switches are cost-effective, easy to deploy, and provide a clear path to a software-defined data center.
We used Dell EMC Network Operating System OS10 for our solution design. OS10 allows multilayered disaggregation of network functions that are layered on an open-source, Linux-based operating system. The following section describes a high-level configuration of the PowerSwitch switches that are used for an OpenShift Container Platform deployment at various scales.
The VLT configuration includes the following high-level steps: create the VLT domain, assign the VLT interconnect (VLTi) discovery interfaces, configure the VLT backup destination, and add VLT port channels for the downstream connections.
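A minimal OS10 sketch of these steps is shown below. The domain ID, VLTi port range, backup address, and port-channel number are placeholders assumed for illustration; verify the commands against the OS10 documentation for your release.

```
! Hypothetical VLT configuration on the first ToR leaf; mirror it on the peer switch.
! Two 100 GbE ports form the VLTi; the backup destination is the peer's OOB management address.
vlt-domain 1
 discovery-interface ethernet1/1/29-1/1/30
 backup destination 100.67.10.12
 peer-routing
!
! Downstream port channel (for example, a bonded server connection) made VLT-aware.
interface port-channel 10
 switchport mode trunk
 vlt-port-channel 10
 no shutdown
```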
Dell EMC Networking modules are supported in Ansible core version 2.3 and later. You can use these modules to manage and automate Dell EMC switches running OS10. The modules run in local connection mode, using CLI over SSH transport.
For an example of CLOS fabric deployment based on the Border Gateway Protocol (BGP), see Provision CLOS fabric using Dell EMC Networking Ansible modules example.
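As a small, hedged example of driving OS10 with these modules, the following playbook pushes a few configuration lines to a group of leaf switches by using the dellos10_config module over a local connection. The inventory group name, credentials variable, interface number, and configuration lines are assumptions for this sketch.

```yaml
# Minimal sketch: push configuration lines to OS10 leaf switches with dellos10_config.
# The group name, cli provider variable, interface, and lines are examples only.
- hosts: leaf_switches
  connection: local
  gather_facts: no
  vars:
    cli:
      host: "{{ ansible_host }}"
      username: admin
      password: "{{ os10_password }}"
  tasks:
    - name: Set the system hostname
      dellos10_config:
        provider: "{{ cli }}"
        lines:
          - hostname {{ inventory_hostname }}

    - name: Describe the uplink to the first spine
      dellos10_config:
        provider: "{{ cli }}"
        parents:
          - interface ethernet1/1/31
        lines:
          - description "Uplink to spine1"
```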
This solution uses the following HA features: dual-homed nodes with bonded NICs, VLT-connected leaf switches, ECMP routing across the leaf-spine fabric, and highly available external load balancing.
Always make external traffic paths highly available to create a complete solution. The cluster administrator can use an external L4 load balancer in a highly available manner or deploy HAProxy in resilient mode. Deploying HAProxy requires one additional server. As shown in the following figure, the components of a highly available load-balancer design using HAProxy are two servers that each run HAProxy and keepalived and share a virtual IP address (VIP); a configuration sketch follows the figure.
Figure 4. Highly available load-balancing
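The following haproxy.cfg fragment is a sketch of TCP pass-through load balancing for the OpenShift API (port 6443) and for secure ingress (port 443). The server names and addresses are placeholders, and equivalent front ends are normally added for the machine config server (port 22623) and for HTTP ingress (port 80).

```
# Sketch of an haproxy.cfg fragment for OpenShift API and ingress traffic (TCP mode).
# Host names and IP addresses are placeholders for this example.
frontend openshift_api
    bind *:6443
    mode tcp
    default_backend openshift_api_nodes

backend openshift_api_nodes
    mode tcp
    balance source
    server master0 192.168.100.10:6443 check
    server master1 192.168.100.11:6443 check
    server master2 192.168.100.12:6443 check

frontend ingress_https
    bind *:443
    mode tcp
    default_backend ingress_https_nodes

backend ingress_https_nodes
    mode tcp
    balance source
    server worker0 192.168.100.20:443 check
    server worker1 192.168.100.21:443 check
```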
Keepalived is an open-source project that implements routing software using the Virtual Router Redundancy Protocol (VRRP). VRRP switches traffic over to a backup server if the primary server fails, and the switchover is achieved through a shared virtual IP address (VIP). To configure keepalived on both servers, define a VRRP instance that owns the VIP on each server, designating one server as the master and the other as the backup, and configure DNS to resolve the load-balancer name to the VIP address.
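A minimal keepalived.conf sketch for the primary server follows. The instance name, interface, virtual router ID, password, and VIP are placeholder assumptions; the second server uses state BACKUP with a lower priority.

```
# Sketch of /etc/keepalived/keepalived.conf on the primary load balancer.
# Interface name, virtual_router_id, password, and VIP are placeholders.
vrrp_instance OCP_LB {
    state MASTER                # use BACKUP on the second server
    interface ens2f0
    virtual_router_id 51
    priority 100                # for example, 90 on the backup server
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.168.100.5/24        # the VIP that the DNS records resolve to
    }
}
```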
This solution requires specific switch configuration so that the OpenShift cluster nodes can discover the storage arrays. Ensure that the switch supports the features that are needed to present the arrays to the cluster nodes, such as Fibre Channel zoning.
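For illustration only, the following Brocade FOS commands sketch a single-initiator zone between a worker-node HBA and a PowerMax front-end port on the FC switch noted below. The alias names and WWPNs are placeholders that must be replaced with the values from your environment.

```
alicreate "ocp_worker1_hba1", "10:00:00:90:fa:00:00:01"
alicreate "powermax_fa1d4", "50:00:09:75:00:00:00:01"
zonecreate "ocp_worker1_pmax", "ocp_worker1_hba1; powermax_fa1d4"
cfgcreate "ocp_fabric_cfg", "ocp_worker1_pmax"
cfgsave
cfgenable "ocp_fabric_cfg"
```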
Note: This solution was validated with the Brocade 6510 FC switch. For sample configurations, see the Dell-ESG GitHub page.
Note: For detailed configuration information and steps, see the Deployment Guide.
Note: See the PowerMax and Isilon guides for detailed usage of these arrays.