OpenShift Container Platform 4.8 supports two cluster deployment types: a three-node cluster and a standard cluster (five or more nodes).
In a three-node cluster, the control plane and cluster workloads run on the same nodes, allowing for small-footprint deployments of OpenShift Container Platform for testing, development, and production environments. A three-node cluster can be expanded with additional compute nodes, but the initial expansion requires that at least two compute nodes be added simultaneously. This is because the ingress networking controller deploys two router pods on compute nodes for full ingress functionality. After this initial expansion, you can add further compute nodes as needed.
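After expanding a three-node cluster, you can confirm that the ingress router pods have landed on two distinct compute nodes. The following is a minimal sketch using the Kubernetes Python client; it is not part of this design guide's tooling, and it assumes a valid kubeconfig and that the router pods carry the standard `ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default` label in the `openshift-ingress` namespace.

```python
# Sketch: confirm the default ingress router pods are scheduled on two
# distinct nodes after expanding a three-node cluster.
# Assumes: `pip install kubernetes` and a valid kubeconfig for the cluster.
from kubernetes import client, config

config.load_kube_config()  # uses the current kubeconfig context
v1 = client.CoreV1Api()

# Label applied by the OpenShift ingress operator to the default router pods
selector = "ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default"
pods = v1.list_namespaced_pod("openshift-ingress", label_selector=selector)

nodes = {p.spec.node_name for p in pods.items if p.spec.node_name}
print(f"Router pods: {len(pods.items)} on nodes: {sorted(nodes)}")
if len(nodes) < 2:
    print("Warning: router pods are not spread across two nodes; "
          "ingress is not fully highly available.")
```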
A standard cluster deployment has three control-plane nodes and at least two compute nodes. With this deployment type, control-plane nodes are marked as “unschedulable,” preventing cluster workloads from being scheduled on those nodes.
Both cluster deployment types require a CSAH node for cluster management and load balancing.
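To make these sizing rules concrete, the following sketch checks a planned node inventory against the two deployment types. The `validate_inventory` function and its rule set are illustrative only and are not part of any Dell or Red Hat tooling.

```python
# Sketch: validate a planned OpenShift 4.8 node inventory against the
# deployment-type rules described above. Purely illustrative.

def validate_inventory(control_plane: int, compute: int, csah: int) -> list[str]:
    """Return a list of problems; an empty list means the plan is valid."""
    problems = []
    if csah < 1:
        problems.append("At least one CSAH node is required.")
    if control_plane != 3:
        problems.append("Both deployment types use exactly 3 control-plane nodes.")
    # Three-node cluster: control plane doubles as compute (compute == 0).
    # Standard cluster: at least 2 compute nodes (ingress needs 2 router pods).
    if compute == 1:
        problems.append("Expanding past three nodes requires adding two "
                        "compute nodes simultaneously.")
    if compute > 30:
        problems.append("Validated designs cap compute nodes at 30.")
    return problems

# Example: a valid standard deployment (3 control-plane + 2 compute + CSAH)
print(validate_inventory(control_plane=3, compute=2, csah=1) or "OK")
# Example: an invalid initial expansion of a three-node cluster
print(validate_inventory(control_plane=3, compute=1, csah=1))
```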
The following table provides basic cluster infrastructure guidance for validated hardware configurations. For detailed configuration information, see Cluster Hardware Design. A container cluster can be deployed quickly and reliably when each node is within the validated design guidelines.
| Type | Description | Count | Notes |
|------|-------------|-------|-------|
| CSAH node | PowerEdge R6525 server | 1 | Creates a bootstrap VM. The CSAH node runs a single instance of HAProxy. For an enterprise high availability (HA) deployment of OpenShift Container Platform 4.8, Dell Technologies recommends using a commercially supported L4 load-balancer or proxy service, or an additional PowerEdge R6525 CSAH node running HAProxy and Keepalived alongside the primary CSAH node. Options include commercial HAProxy, NGINX, and F5. |
| Control-plane nodes | PowerEdge R6525 server | 3 | Deployed using the bootstrap VM. |
| Compute nodes | PowerEdge R6525 or R7525 server | A minimum of 2* per rack, maximum of 30 | No compute nodes are required for a three-node cluster. A standard deployment requires a minimum of two compute nodes (and three control-plane nodes). To expand a three-node cluster, you must add two compute nodes simultaneously. After the cluster is operational, you can add more compute nodes through the Cluster Management Service. |
| Data switches | Either of the following switches: | 2 per rack | Configured at installation time. |
| iDRAC network | Dell PowerSwitch S3048-ON | 1 per rack | Used for out-of-band (OOB) management. |
| Rack | Selected according to site standards | 1–3 racks | For multirack configurations, consult your Dell Technologies or Red Hat representative regarding custom engineering design. |
*A three-node cluster does not require any compute nodes. To expand a three-node cluster with additional compute machines, you must first expand the cluster to a five-node cluster using two additional compute nodes.
The RHCOS nodes must fetch Ignition config files from the Machine Config server. This operation uses an initramfs-based node startup for the initial network configuration. The startup requires a DHCP server to provide a network connection that gives access to the Ignition files for that node. Subsequent operations can use static IP addresses.
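As a rough preflight check, the sketch below fetches the worker Ignition config from the Machine Config server the way a booting RHCOS node would. The `api-int` hostname and port 22623 follow standard OpenShift endpoint conventions; the cluster domain is a placeholder, and the Ignition specification version in the Accept header may need to match your cluster.

```python
# Sketch: probe the Machine Config server to confirm Ignition files are
# fetchable, as a booting RHCOS node would. Run from a host on the same
# network. <cluster>.<domain> is a placeholder for your cluster domain.
import requests
import urllib3

urllib3.disable_warnings()  # the MCS serves a cluster-internal CA certificate

MCS_URL = "https://api-int.<cluster>.<domain>:22623/config/worker"

resp = requests.get(
    MCS_URL,
    # The MCS expects an Accept header naming an Ignition spec version.
    headers={"Accept": "application/vnd.coreos.ignition+json;version=3.2.0"},
    verify=False,  # connectivity probe only; cert is signed by the cluster CA
    timeout=10,
)
print(resp.status_code)  # 200 means the Ignition config is being served
print(resp.json().get("ignition", {}).get("version"))
```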