The following table provides basic cluster infrastructure guidance. For detailed information about configuration, see Chapter 5, Hardware Design. Following the node design guidance helps you build a container ecosystem cluster that can be deployed quickly and reliably, provided that each node stays within the validated design guidelines.
Table 2. Hardware infrastructure for OpenShift Container Platform 4.2 cluster deployment
| Type | Description | Count | Notes |
|---|---|---|---|
| CSAH node | Dell PowerEdge R640 | 1 | Creates a bootstrap node. The bootstrap node is later converted to a worker node. |
| Master nodes | Dell PowerEdge R640 | 3 | Deployed by the bootstrap node. |
| Worker nodes | Dell PowerEdge R640 or R740xd | Minimum 3, maximum 30 per rack | Initially deployed by the bootstrap node, then later deployed by the Cluster Management Service. |
| Storage nodes* | Dell PowerEdge R640 or R740xd | Minimum 3 | Might be used to deploy OpenShift Container Storage 4.3 (a future release). |
| Data switches | Either of the following switches: | 1 or 2 | Autoconfigured at installation time. Note: HA network configuration requires two data path switches per rack. Note: Multi-rack clusters require careful network topology planning; a leaf/spine network switch configuration might be necessary. |
| iDRAC network | Dell PowerSwitch S3048-ON | 1 | Used for out-of-band (OOB) management. |
| Rack | Selected according to site standards | 1 | For multirack configurations, consult Dell EMC or Red Hat for a custom engineering design. |
*This information is included to provide context for the upcoming release of OpenShift Container Platform 4.3. The 4.3 release might include Ceph-based OpenShift Container Storage that is designed for use within the cluster infrastructure. Container Storage can also be used for application data, although managed and protected external storage is generally preferred for non-infrastructure application use.
Installing OpenShift Container Platform requires, at a minimum, the nodes that are listed in Table 2: one temporary bootstrap node, three master nodes, and three worker nodes.
Note: Dell EMC Ready Stack for Red Hat OpenShift Container Platform 4.2 does not currently support redundant network configuration because of technical issues that we discovered during our development work. We expect these issues to be resolved by the time Red Hat releases OpenShift Container Platform 4.3. We therefore recommend that all servers be provisioned with at least dual network adapters.
The HA of the key services that make up your cluster is necessary to ensure run-time integrity. The use of a separate physical node for each cluster node type is foundational to the design guidance that is provided for your bare-metal cluster. As used in this guide, HA includes provisioning at least dual network adapters and dual network switches that are configured for redundant pathing, which maintains network continuity if a network adapter or a network switch fails.
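As a hypothetical sketch of the dual-adapter configuration described above, the following NetworkManager commands create an active-backup bond from two physical adapters. The interface names (eno1, eno2) and connection names are assumptions for illustration; adjust them for your hardware:

```
# Create the bond interface in active-backup mode with link monitoring
nmcli connection add type bond con-name bond0 ifname bond0 \
    bond.options "mode=active-backup,miimon=100"
# Enslave both physical adapters to the bond
nmcli connection add type ethernet con-name bond0-port1 ifname eno1 master bond0
nmcli connection add type ethernet con-name bond0-port2 ifname eno2 master bond0
# Activate the bond
nmcli connection up bond0
```

With active-backup mode, traffic fails over to the second adapter (and, when each adapter is cabled to a different switch, to the second switch) if the active path is lost.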
OpenShift Container Platform 4.2 is supported on Red Hat Enterprise Linux 7.6 and later, as well as on Red Hat Enterprise Linux CoreOS (RHCOS) 4.2. You must use RHCOS for the bootstrap and control plane (master) machines, and you can use either RHCOS or Red Hat Enterprise Linux 7.6 for compute (worker) machines. RHCOS nodes run an immutable operating system.
The following table shows the minimum resource requirements for the OpenShift Container Platform 4.2 nodes:
Table 3. Minimum resource requirements for OpenShift Container Platform 4.2 nodes
| Node type | Operating system | Minimum CPU cores | RAM | Storage |
|---|---|---|---|---|
| CSAH | Red Hat Enterprise Linux 8 | 4 | 64 GB | 200 GB |
| Bootstrap | RHCOS 4.2 | 4 | 16 GB | 120 GB |
| Master | RHCOS 4.2 | 4 | 16 GB | 120 GB |
| Worker | RHCOS 4.2 or Red Hat Enterprise Linux 7.6 | 2 | 8 GB | 120 GB |
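The minimums in Table 3 can be verified before installation. The following helper function is a hypothetical sketch (it is not part of the Ready Stack tooling) that compares a node's resources against the per-role minimums from the table:

```shell
#!/bin/sh
# check_node_minimums ROLE CORES RAM_GB DISK_GB
# Returns 0 if the given resources meet the Table 3 minimums for ROLE.
check_node_minimums() {
    role=$1; cores=$2; ram_gb=$3; disk_gb=$4
    case "$role" in
        csah)             min_cores=4; min_ram=64; min_disk=200 ;;
        bootstrap|master) min_cores=4; min_ram=16; min_disk=120 ;;
        worker)           min_cores=2; min_ram=8;  min_disk=120 ;;
        *) echo "unknown role: $role" >&2; return 2 ;;
    esac
    [ "$cores" -ge "$min_cores" ] && \
    [ "$ram_gb" -ge "$min_ram" ] && \
    [ "$disk_gb" -ge "$min_disk" ]
}

# Example: check the local machine against the worker minimums
# (disk size is passed in here; query your target install disk as needed).
if check_node_minimums worker "$(nproc)" \
    "$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)" 120
then
    echo "meets worker minimums"
else
    echo "below worker minimums"
fi
```

Running such a check on each server before starting the deployment avoids discovering an undersized node partway through cluster installation.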
The RHCOS nodes must fetch Ignition files from the Machine Config server. This operation relies on an initial network configuration that is established during the initramfs stage of node startup. The initial boot therefore requires a DHCP server to provide the network connection through which each node retrieves its Ignition files. Static IP addresses can be assigned for subsequent operations.
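One way to provide the required DHCP service is with dnsmasq. The following fragment is a hypothetical sketch; the interface name, subnet, hostnames, and MAC-to-IP reservations are assumptions for illustration, not values from this design:

```
# Example /etc/dnsmasq.d/openshift.conf fragment serving DHCP on the
# bare-metal network during initial RHCOS boot
interface=eno1
dhcp-range=192.168.10.100,192.168.10.150,255.255.255.0,12h

# Reserve a fixed address per node MAC so each node receives a
# predictable lease while it fetches its Ignition files
dhcp-host=aa:bb:cc:dd:ee:01,master-0,192.168.10.101
dhcp-host=aa:bb:cc:dd:ee:02,master-1,192.168.10.102
```

Per-MAC reservations keep node addresses stable across reboots, which simplifies the later transition to static IP addressing.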