In OpenShift Container Platform 4.6, two types of cluster deployment are available: a three-node cluster and a standard cluster (five or more nodes). In a three-node cluster, the control plane and cluster workloads run on the same nodes, enabling small-footprint OpenShift deployments for testing, development, and production environments. A three-node cluster can be expanded with additional compute nodes, but the initial expansion requires adding at least two compute nodes simultaneously because the ingress networking controller deploys two router pods on compute nodes for full ingress functionality. You can add further compute nodes subsequently as needed. A standard cluster deployment has three control-plane nodes and at least two compute nodes. In this deployment type, the control-plane nodes are marked as unschedulable, which prevents cluster workloads from being scheduled on them. Both cluster deployment types require two CSAH nodes for cluster management and resilient load balancing.
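The schedulable-control-plane behavior described above is governed by the cluster-scoped Scheduler resource; in a three-node cluster the installer enables it automatically. The following is a minimal sketch of that resource as it would appear in a three-node cluster (shown for illustration only, not a value you normally set by hand):

```yaml
# Cluster-scoped Scheduler resource (schedulers.config.openshift.io/cluster).
# In a three-node cluster, mastersSchedulable is true so that workloads can
# run on the control-plane nodes; a standard cluster leaves it false, which
# keeps the control-plane nodes unschedulable.
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: true
```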
The following table provides basic cluster infrastructure guidance for validated hardware configurations. For detailed configuration information, see Cluster Hardware Design. A container cluster can be deployed quickly and reliably when each node is within the validated design guidelines.
Table 5. Hardware infrastructure for OpenShift Container Platform 4.6 cluster deployment
| Type | Description | Count | Notes |
|---|---|---|---|
| CSAH node | Dell PowerEdge R640/R650 server | 2 | Creates a bootstrap VM. CSAH runs a single instance of HAProxy. For an enterprise high availability (HA) deployment of OpenShift Container Platform 4.6, Dell Technologies recommends using a commercially supported L4 load balancer or proxy service, or an additional PowerEdge R640 CSAH node running HAProxy and Keepalived alongside the primary CSAH node. Options include commercial HAProxy, Nginx, and F5. |
| Control-plane nodes | PowerEdge R640/R650 server | 3 | Deployed using the bootstrap VM. |
| Compute nodes | PowerEdge R640 or R740xd server; PowerEdge R650 or R750 server | Minimum of 2* per rack; maximum of 30 | No compute nodes are required for a three-node cluster. A standard deployment requires a minimum of two compute nodes (and three control-plane nodes). To expand a three-node cluster, you must add two compute nodes simultaneously. After the cluster is operational, you can add more compute nodes through the Cluster Management Service. |
| Data switches | Either of the following switches: | 2 per rack | Configured at installation time. |
| iDRAC network | Dell PowerSwitch S3048-ON | 1 per rack | Used for out-of-band (OOB) management. |
| Rack | Selected according to site standards | 1–3 racks | For multirack configurations, consult your Dell Technologies or Red Hat representative regarding custom engineering design. |
*A three-node cluster does not require any compute nodes. To expand a three-node cluster with additional compute machines, you must first expand the cluster to a five-node cluster using two additional compute nodes.
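When compute nodes are added to a running cluster, each new node's kubelet submits certificate signing requests (CSRs) that a cluster administrator must approve before the node joins. A minimal sketch using the standard `oc` client, run against a live cluster (the CSR name shown is an example, not a value from this design):

```shell
# List pending certificate signing requests from newly added nodes.
oc get csr

# Approve a specific pending request (the name here is illustrative).
oc adm certificate approve csr-8vnps

# Verify that the new compute nodes have joined and are Ready.
oc get nodes
```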
Installing OpenShift Container Platform requires, at a minimum, the following nodes:
- One temporary bootstrap node, which is discarded after the control plane is running
- Three control-plane (controller) nodes
- For a standard cluster, at least two compute nodes; a three-node cluster runs workloads on the control-plane nodes instead
HA of the key services that make up the OpenShift Container Platform cluster is necessary to ensure run-time integrity. Redundancy of physical nodes for each cluster node type is an important aspect of HA for the bare-metal cluster.
In this design guide, HA includes provisioning at least two network interface controllers (NICs) and two network switches that are configured to provide redundant paths. These redundant paths preserve network connectivity if a NIC or a network switch fails. HA load balancing can be provided by an enterprise-grade load balancer or by an additional PowerEdge R640 server running HAProxy and Keepalived alongside the CSAH node.
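As an illustration of the HAProxy option described above, the fragment below sketches pass-through TCP load balancing for the two control-plane endpoints an OpenShift 4.6 installation depends on. All host names and IP addresses are placeholders, not values from this design:

```
# Minimal haproxy.cfg sketch: TCP pass-through for the OpenShift API
# (port 6443) and the Machine Config server (port 22623). The bootstrap
# entry is removed after the control plane comes up.
frontend openshift-api
    bind *:6443
    mode tcp
    default_backend openshift-api-be

backend openshift-api-be
    mode tcp
    balance source
    server bootstrap 192.168.32.20:6443 check
    server control0  192.168.32.21:6443 check
    server control1  192.168.32.22:6443 check
    server control2  192.168.32.23:6443 check

frontend machine-config
    bind *:22623
    mode tcp
    default_backend machine-config-be

backend machine-config-be
    mode tcp
    balance source
    server bootstrap 192.168.32.20:22623 check
    server control0  192.168.32.21:22623 check
    server control1  192.168.32.22:22623 check
    server control2  192.168.32.23:22623 check
```

In an HA deployment, Keepalived floats a virtual IP address between the two HAProxy instances so that either can serve these frontends if the other fails.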
OpenShift Container Platform 4.6 must use Red Hat Enterprise Linux CoreOS (RHCOS) 4.6 for the control-plane nodes and compute nodes.
Note: Red Hat Enterprise Linux 7 compute nodes are deprecated, and support for them will be removed in a future release of OpenShift. For that reason, this design guide and its companion implementation guide no longer use Red Hat Enterprise Linux 7 compute nodes. The bootstrap and control-plane nodes must use RHCOS 4.6 as their operating system. Each of these nodes must be immutable.
The following table shows the minimum resource requirements for the nodes:
Table 6. Minimum resource requirements for OpenShift Container Platform 4.6 nodes
| Node type | Operating system | Minimum CPU cores | RAM | Storage |
|---|---|---|---|---|
| CSAH | Red Hat Enterprise Linux 8.2 | 4 | 32 GB | 200 GB |
| Bootstrap | RHCOS 4.6 | 4 | 16 GB | 120 GB |
| Controller | RHCOS 4.6 | 4 | 16 GB | 120 GB |
| Compute | RHCOS 4.6 | 2 | 8 GB | 120 GB |
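The minimums in Table 6 can be encoded for a quick pre-deployment sanity check. The sketch below is illustrative only; the function name and argument format are not part of any OpenShift or Dell tooling:

```shell
# check_node TYPE CORES RAM_GB STORAGE_GB
# Returns 0 if the proposed node meets the Table 6 minimums for its type.
check_node() {
  case "$1" in
    csah)       set -- "$2" "$3" "$4" 4 32 200 ;;
    bootstrap)  set -- "$2" "$3" "$4" 4 16 120 ;;
    controller) set -- "$2" "$3" "$4" 4 16 120 ;;
    compute)    set -- "$2" "$3" "$4" 2 8 120 ;;
    *)          return 2 ;;  # unknown node type
  esac
  # Positional parameters are now: cores ram storage min_cores min_ram min_storage
  [ "$1" -ge "$4" ] && [ "$2" -ge "$5" ] && [ "$3" -ge "$6" ]
}

# Example: a 2-core, 8 GB, 120 GB machine qualifies as a compute node
# but not as a controller.
check_node compute 2 8 120 && echo "compute: ok"
check_node controller 2 8 120 || echo "controller: insufficient"
```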
The RHCOS nodes must fetch Ignition config files from the Machine Config server. This operation uses an initramfs-based node startup for the initial network configuration. The startup requires a DHCP server to provide a network connection that gives each node access to its Ignition config files. Subsequent operations can use static IP addresses.
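The DHCP requirement above can be met with any DHCP server; the fragment below is a minimal dnsmasq sketch in which each node is pinned to a known address for its first boot. The interface name, MAC addresses, IPs, and host names are placeholders, not values from this design:

```
# dnsmasq DHCP sketch for the initial RHCOS boot. Each node receives a
# predictable address so that it can reach the Machine Config server and
# fetch its Ignition config during the initramfs-based startup.
interface=eno1
dhcp-range=192.168.32.100,192.168.32.150,12h
dhcp-host=aa:bb:cc:dd:ee:01,192.168.32.21,control0
dhcp-host=aa:bb:cc:dd:ee:02,192.168.32.22,control1
dhcp-host=aa:bb:cc:dd:ee:03,192.168.32.23,control2
dhcp-host=aa:bb:cc:dd:ee:04,192.168.32.31,compute0
dhcp-host=aa:bb:cc:dd:ee:05,192.168.32.32,compute1
```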