OpenShift Container Platform 4.6 supports two types of cluster deployment: a three-node cluster and a standard cluster (five or more nodes). In a three-node cluster, the control plane and cluster workloads run on the same nodes, enabling small-footprint OpenShift deployments for testing, development, and production environments. A three-node cluster can later be expanded with compute nodes, but the initial expansion must add at least two compute nodes at the same time, because the ingress networking controller deploys two router pods on compute nodes for full ingress functionality. You can add further compute nodes subsequently as needed. A standard cluster deployment has three control-plane nodes and at least two compute nodes. In this deployment type, control-plane nodes are marked as “unschedulable,” which prevents cluster workloads from being scheduled on those nodes. Both cluster deployment types require two CSAH nodes for cluster management and resilient load balancing.
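As an illustrative check (a sketch using the standard `oc` CLI; it assumes an authenticated session against your cluster), you can confirm whether control-plane nodes accept workloads and where the two ingress router pods are scheduled:

```shell
# List nodes and their roles. In a standard cluster, control-plane
# nodes carry the "master" role and are tainted unschedulable.
oc get nodes

# Show whether workloads may be scheduled on control-plane nodes:
# true in a three-node cluster, false in a standard cluster.
oc get scheduler cluster -o jsonpath='{.spec.mastersSchedulable}'

# Verify that the two ingress router pods run on compute nodes
# (the -o wide output includes the node each pod landed on).
oc get pods -n openshift-ingress -o wide
```

This requires a live cluster, so treat it as a verification step after deployment rather than part of the install procedure.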
A container cluster can be deployed quickly and reliably when each node conforms to the validated design guidelines. The following table provides basic cluster infrastructure guidance.
| Type | Description | Number | Notes |
|------|-------------|--------|-------|
| CSAH node | Dell EMC PowerEdge R640 server | 2 | Creates a bootstrap VM. CSAH runs a single instance of HAProxy. For an enterprise high-availability (HA) deployment of OpenShift Container Platform 4.6, Dell Technologies recommends a commercially supported L4 load balancer or proxy service (options include commercial HAProxy, NGINX, and F5), or an additional PowerEdge R640 CSAH node running HAProxy and Keepalived alongside the primary CSAH node. |
| Control-plane nodes | Dell EMC PowerEdge R640 server | 3 | Deployed using the bootstrap VM. |
| Compute nodes | Dell EMC PowerEdge R640 or R740xd server | Minimum 2* per rack, maximum 30 | A three-node cluster requires no compute nodes. A standard deployment requires a minimum of two compute nodes (and three control-plane nodes). To expand a three-node cluster, you must add two compute nodes at the same time. After the cluster is operational, you can add more compute nodes through the Cluster Management Service. |
| Data switches | Dell EMC PowerSwitch S5248-ON or Dell EMC PowerSwitch S5232-ON | 2 per rack | Configured at installation time. Note: HA network configuration requires two data-path switches per rack. Multirack clusters require network topology planning; a leaf-spine network switch configuration may be necessary. |
| iDRAC network | Dell EMC PowerSwitch S3048-ON | 1 per rack | Used for out-of-band (OOB) management. |
| Rack | Selected according to site standards | 1–3 racks | For multirack configurations, consult your Dell Technologies or Red Hat representative regarding custom engineering design. |
*A three-node cluster does not require any compute nodes. To expand a three-node cluster with additional compute machines, first expand it to a five-node cluster by adding two compute nodes at the same time.
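To illustrate the HAProxy role of the CSAH node, the following configuration fragment follows the usual OpenShift 4.x pattern of TCP load-balancing the API endpoint (port 6443) across control-plane nodes and the ingress endpoint (port 443) across compute nodes. This is a minimal sketch, not a configuration from this guide: the host names and the omitted machine-config-server (22623) and HTTP ingress (80) sections would need to be filled in for a real deployment.

```
# /etc/haproxy/haproxy.cfg (sketch; host names are placeholders)
frontend api
    bind *:6443
    mode tcp
    default_backend api-backend

backend api-backend
    mode tcp
    balance roundrobin
    server control1 control1.example.com:6443 check
    server control2 control2.example.com:6443 check
    server control3 control3.example.com:6443 check

frontend ingress-https
    bind *:443
    mode tcp
    default_backend ingress-https-backend

backend ingress-https-backend
    mode tcp
    balance roundrobin
    server compute1 compute1.example.com:443 check
    server compute2 compute2.example.com:443 check
```

In the recommended HA option, a second CSAH node would run the same HAProxy configuration, with Keepalived floating a virtual IP between the two nodes.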