Design Guide—Red Hat OpenShift Container Platform 4.10 on Intel-powered Dell Infrastructure: Infrastructure requirements
In OpenShift Container Platform 4.10, two different cluster deployment types are available:
While a three-node cluster can be expanded with additional compute nodes, the initial expansion requires adding at least two compute nodes simultaneously, because the ingress networking controller deploys two router pods on compute nodes for full functionality. After those first two compute nodes are in place, you can add further compute nodes as needed.
SNO bundles both control-plane and data-plane capabilities into a single server and provides users with a consistent experience across the sites where OpenShift is deployed.
OpenShift Container Platform on a single node is a specialized installation that requires the creation of a special ignition configuration ISO. The primary use case for SNO is edge computing workloads. The main tradeoff with a single node is the lack of HA.
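As an illustration, a minimal `install-config.yaml` for a single-node deployment might look like the following sketch. The base domain, cluster name, machine network, installation disk, pull secret, and SSH key shown here are placeholders, not values from this design guide:

```yaml
apiVersion: v1
baseDomain: example.com          # placeholder domain
metadata:
  name: sno                      # placeholder cluster name
compute:
- name: worker
  replicas: 0                    # SNO has no separate compute nodes
controlPlane:
  name: master
  replicas: 1                    # single node carries the control plane
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.100.0/24       # placeholder machine network
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
bootstrapInPlace:
  installationDisk: /dev/sda     # placeholder target disk
pullSecret: '<pull-secret>'      # obtain from the Red Hat Hybrid Cloud Console
sshKey: '<ssh-public-key>'
```

From a directory containing this file, `openshift-install create single-node-ignition-config` generates the bootstrap-in-place ignition configuration, which is then embedded into an RHCOS live ISO to produce the special ignition configuration ISO referenced above.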
The following table provides basic cluster infrastructure guidance for validated hardware configurations. For detailed configuration information, see Cluster Hardware Design. A container cluster can be deployed quickly and reliably when each node is within the validated design guidelines.
| Type | Description | Count | Notes |
|------|-------------|-------|-------|
| CSAH node | Dell PowerEdge R650 server; Dell PowerEdge XR11 (in a single-node deployment) | 2 | Creates a bootstrap VM. CSAH runs a single instance of HAProxy. For an enterprise HA deployment of OpenShift Container Platform 4.10, Dell Technologies recommends a commercially supported L4 load balancer or proxy service, or an additional PowerEdge R650 CSAH node running HAProxy and Keepalived alongside the primary CSAH node. Options include commercial HAProxy, NGINX, and F5. For SNO, the CSAH node serves as the DNS server, DHCP server, and administration host. |
| Control-plane nodes | Dell PowerEdge R650 or R750 server | 3 | Deployed using the bootstrap VM. |
| Compute nodes | Dell PowerEdge R650 or R750 server; Dell PowerEdge R750xa server | Minimum of 2* per rack; maximum of 30 | No compute nodes are required for a three-node cluster. A standard deployment requires a minimum of two compute nodes (and three control-plane nodes). To expand a three-node cluster, add two compute nodes simultaneously. When the cluster is operational, you can add more compute nodes. |
| Single-node OpenShift | Dell PowerEdge XR11 or XR12 server; Dell PowerEdge R650 or R750 server | 1 | Edge-optimized server options for deploying single-node OpenShift. |
| Network switches | Either of the following switches: | 2 per rack | Configured at installation time. |
| OOB network | Dell PowerSwitch S3148-ON | 1 per rack | Used for iDRAC management. |
| Rack | Selected according to site standards | 1 to 7 racks | For multirack configurations, consult your Dell Technologies or Red Hat representative regarding custom engineering design. |
*A three-node cluster does not require any compute nodes. To expand a three-node cluster with additional compute machines, you must first expand the cluster to a five-node cluster using two additional compute nodes.
Installing OpenShift Container Platform requires, at a minimum, the following nodes:

- One temporary bootstrap machine (deployed as a VM on the CSAH node)
- Three control-plane nodes
- At least two compute nodes
In a single-node OpenShift deployment, a bootstrap VM is not required, and the CSAH node serves as the administration host.
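Because the CSAH node provides DNS and DHCP for SNO, it must answer for the `api`, `api-int`, and wildcard `*.apps` records of the cluster. A dnsmasq sketch, assuming a hypothetical cluster domain `sno.example.com` and node address `192.168.100.10` (both placeholders, not values from this guide):

```
# /etc/dnsmasq.d/sno.conf -- illustrative only
# Resolve the cluster API endpoints to the single node
address=/api.sno.example.com/192.168.100.10
address=/api-int.sno.example.com/192.168.100.10
# Wildcard record for application routes (*.apps)
address=/.apps.sno.example.com/192.168.100.10

# DHCP: pin the node to a fixed address by its NIC MAC (placeholder)
dhcp-range=192.168.100.50,192.168.100.100,12h
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.100.10,sno-node
```

Equivalent records can be served from any DNS/DHCP service; dnsmasq is shown here only because it combines both roles on one host.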
HA of the key services that make up the OpenShift Container Platform cluster is necessary to ensure run-time integrity. Redundancy of physical nodes for each cluster node type is an important aspect of HA for the bare-metal cluster. In this guide, HA includes the provisioning of at least two network interface controllers (NICs) and two network switches configured to provide redundant paths, ensuring network continuity if a NIC or a network switch fails. HA load-balancing can be provided by an enterprise-grade load balancer or by an additional PowerEdge R650 server running HAProxy and Keepalived alongside the CSAH node.
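For the HAProxy option, the CSAH node load-balances the cluster API (port 6443) across the control-plane nodes and the ingress traffic (ports 80 and 443) across the nodes running the router pods. A minimal `haproxy.cfg` sketch, using hypothetical node addresses (the machine-config server on port 22623 and ingress HTTP on port 80 follow the same pattern):

```
# haproxy.cfg sketch -- addresses are placeholders, not from this guide
frontend api
    bind *:6443
    mode tcp
    default_backend api-backend

backend api-backend
    mode tcp
    balance roundrobin
    # Control-plane nodes
    server cp0 192.168.1.10:6443 check
    server cp1 192.168.1.11:6443 check
    server cp2 192.168.1.12:6443 check

frontend ingress-https
    bind *:443
    mode tcp
    default_backend ingress-https-backend

backend ingress-https-backend
    mode tcp
    balance roundrobin
    # Compute nodes hosting the ingress router pods
    server worker0 192.168.1.20:443 check
    server worker1 192.168.1.21:443 check
```

TCP mode is used so that TLS terminates on the cluster nodes rather than on the load balancer, which is the standard arrangement for OpenShift API and ingress traffic.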