Home > Workload Solutions > Container Platforms > Red Hat OpenShift Container Platform > Archive > Design Guide—Red Hat OpenShift Container Platform 4.2 > Validated hardware configuration options
We used various server configurations for the Dell EMC Ready Stack for Red Hat OpenShift Container Platform. Dell EMC recommends selecting server configurations that are known both to provide a satisfactory deployment experience and to meet or exceed Day-2 operating expectations. This section provides guidelines for Intel microprocessor selection, memory configuration, local (on-server) disk storage, and network configuration.
While it is tempting to minimize container ecosystem node costs, higher overall hardware and operating costs might result as the cluster expands over time. As Table 7 shows, the lower-cost node configuration requires 40 servers to meet workload requirements, while the higher-cost configuration requires only 14. Although each lower-cost server costs approximately 35 percent less, the total cost of the servers necessary to meet workload requirements is nearly double that of the higher-performing servers. Higher-density computing is generally the most prudent choice.
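The fleet-cost trade-off described above can be sketched with a quick calculation. The per-server prices below are illustrative assumptions, not Dell EMC pricing; only the server counts (40 versus 14) come from Table 7:

```python
# Hypothetical illustration of the fleet-cost trade-off.
# Per-server prices are assumptions chosen so the lower-cost server
# is roughly 35 percent cheaper than the higher-density server.
LOW_COST_SERVER_PRICE = 13_000   # assumed price per lower-cost node (USD)
HIGH_COST_SERVER_PRICE = 20_000  # assumed price per higher-density node (USD)

low_cost_fleet = 40 * LOW_COST_SERVER_PRICE    # 40 servers needed
high_cost_fleet = 14 * HIGH_COST_SERVER_PRICE  # 14 servers needed

print(f"Lower-cost fleet:  ${low_cost_fleet:,}")    # $520,000
print(f"Higher-cost fleet: ${high_cost_fleet:,}")   # $280,000
print(f"Ratio: {low_cost_fleet / high_cost_fleet:.2f}x")  # nearly double
```

Even with a substantial per-node discount, the larger server count dominates the total.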
The Intel Xeon Gold processor family provides performance, advanced reliability, and hardware-enhanced security for demanding compute, network, and storage workloads.
Dell EMC recommends Intel Xeon Gold series CPUs in the range of the 6226 to 6252 models. This selection is based on experience gained from deployment and operation of OpenShift Container Platform 4.2 running on Dell EMC PowerEdge R640 and R740xd servers. The design information in this document is based on clusters of servers with either Intel Xeon Gold 6240 or Intel Xeon Gold 6238 processors.
When selecting a processor, consider the following recommendations:
When ordering and configuring your PowerEdge servers, see the Dell EMC PowerEdge R640 Technical Guide and Dell EMC PowerEdge R740 and R740xd Technical Guide. For CPU information, see Intel Xeon Gold Processors.
The Dell EMC engineering team designated 192 GB, 384 GB, or 768 GB of RAM as the best choices based on memory usage, DIMM module capacity at current cost, and likely obsolescence during the server life cycle. We chose the mid-range configuration of 384 GB so that each CPU has a multiple of three banks of DIMM slots populated, ensuring maximum memory-access cycle speed. You can alter the memory configuration to meet your budgetary constraints and operating needs.
Consult OpenShift architectural guidance and consider your own observations from running your workloads on OpenShift Container Platform 4.2. For important guidance on server memory population (the location of DIMM modules in DIMM slots), particularly the use of the firmware setting for Performance Optimized mode, see Dell EMC PowerEdge 14G Memory Population Rules, updated for certain server configurations, in the Dell EMC Knowledge Base.
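The balanced-population guidance above can be illustrated with a short sketch. The helper name and the 2-socket, 32 GB DIMM example below are assumptions for illustration; the only rule encoded is the document's "multiples of three banks per CPU" guideline:

```python
# Minimal sketch (hypothetical helper): check that each CPU's populated
# DIMM count is a multiple of three banks, per the guidance above.
def balanced_population(dimms_per_cpu: int) -> bool:
    """Return True when the DIMM count keeps memory banks balanced."""
    return dimms_per_cpu > 0 and dimms_per_cpu % 3 == 0

# Example: 384 GB across 2 CPUs using 32 GB DIMMs -> 6 DIMMs per CPU
total_gb, cpus, dimm_gb = 384, 2, 32
dimms_per_cpu = total_gb // cpus // dimm_gb
print(dimms_per_cpu, balanced_population(dimms_per_cpu))  # 6 True
```

A configuration such as 4 DIMMs per CPU would fail this check and risk reduced memory-access speed.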
The performance of disk drives significantly limits the performance of many aspects of OpenShift cluster deployment and operation. The Dell EMC engineering team validated deployment and operation of OpenShift Container Platform using magnetic storage drives (spinners), SATA SSD drives, SAS SSD drives, and NVMe SSD drives.
Our selection of all-NVMe SSD drives was based on a comparison of cost per GB of capacity divided by observed performance criteria such as cluster deployment time, application deployment characteristics, and application performance. There are no universal guidelines, but over time users gain insight into the capacities that best meet their requirements. Optionally, you can deploy the cluster with only HDD drives; this configuration has been tested and shown to have few adverse performance consequences.
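The selection criterion described above, cost per GB divided by a performance score, can be sketched as follows. All prices, capacities, and performance scores below are illustrative assumptions, not measured values:

```python
# Hypothetical comparison of drive options using the criterion above.
# type: (cost_usd, capacity_gb, relative_performance_score) -- all assumed
drives = {
    "HDD":      (250, 4000, 1.0),
    "SATA SSD": (400, 1920, 4.0),
    "NVMe SSD": (700, 1600, 10.0),
}

def value_metric(cost: float, capacity_gb: float, perf: float) -> float:
    """Lower is better: cost per GB, discounted by performance."""
    return (cost / capacity_gb) / perf

for name, (cost, cap, perf) in drives.items():
    print(f"{name:9s} {value_metric(cost, cap, perf):.4f}")
```

Under these assumed figures, NVMe's higher cost per GB is more than offset by its performance, which matches the reasoning behind the all-NVMe selection.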
When selecting the switches to include in the OpenShift Container Platform cluster infrastructure, consider the overall balance of I/O pathways among your server nodes, network switches, and NICs. If you include high-I/O-bandwidth drives in your platform, choose network switches and NICs that provide adequate network I/O to support high-speed, low-latency drives.
The following table provides information about selecting NICs to ensure adequate I/O bandwidth and to take advantage of available disk I/O:
Table 11. NIC selection to optimize I/O bandwidth
| NIC selection | Worker node storage device type |
|---|---|
| 2 x 25 GbE | Spinning magnetic media (HDD) |
| 2 x 25 GbE or 4 x 25 GbE | SATA or SAS SSD drives |
| 4 x 25 GbE or 2 x 100 GbE | NVMe SSD drives |
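The Table 11 mapping can also be expressed programmatically. The structure below is a sketch, and the aggregate-bandwidth helper is a convenience for comparing options, not part of any Dell EMC tooling:

```python
# Sketch of the Table 11 mapping: recommended NIC options per worker-node
# storage type, with a helper to compute aggregate bandwidth in Gb/s.
nic_options = {
    "HDD":          ["2 x 25 GbE"],
    "SATA/SAS SSD": ["2 x 25 GbE", "4 x 25 GbE"],
    "NVMe SSD":     ["4 x 25 GbE", "2 x 100 GbE"],
}

def aggregate_gbps(option: str) -> int:
    """Parse '<count> x <speed> GbE' into total Gb/s (ports x speed)."""
    count, _, speed, _ = option.split()
    return int(count) * int(speed)

for media, options in nic_options.items():
    print(media, [aggregate_gbps(o) for o in options])
```

Faster storage media climb the table toward higher aggregate network bandwidth, so the drives are never starved by the NICs.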
True network HA fail-safe design demands that each NIC be duplicated so that a pair of ports can be split across two physically separated switches. A pair of PowerSwitch S5248F-ON switches provides 96 x 25 GbE ports, enough for approximately 20 servers, which makes this switch cost-effective for a compact cluster. While you could add another pair of S5248F-ON switches to scale the cluster to a full rack, consider using PowerSwitch S5232F-ON switches for a larger cluster.
The PowerSwitch S5232F-ON provides 32 x 100 GbE ports. When used with 4-way QSFP28-to-SFP28 breakout cables, a pair of these switches provides up to 256 x 25 GbE endpoints, more than enough for a rack full of servers in the cluster before more complex network topologies are required.
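The port arithmetic behind these two switch choices can be checked with a short sketch. The port counts come from the switch descriptions above; the servers-per-pair figures assume 4 x 25 GbE NIC ports per server and ignore ports reserved for uplinks, which is why usable capacity is somewhat lower in practice:

```python
# Worked port arithmetic for the two PowerSwitch options discussed above.
# S5248F-ON: 48 x 25 GbE per switch.
# S5232F-ON: 32 x 100 GbE per switch; each 100 GbE port splits into
#            4 x 25 GbE endpoints with a breakout cable.
s5248f_pair_ports = 2 * 48        # 96 x 25 GbE endpoints per switch pair
s5232f_pair_ports = 2 * 32 * 4    # 256 x 25 GbE endpoints via breakout

# Assuming 4 x 25 GbE ports per server, split across the pair for HA,
# before reserving any ports for uplinks:
print(s5248f_pair_ports // 4)   # 24 servers per S5248F-ON pair
print(s5232f_pair_ports // 4)   # 64 servers per S5232F-ON pair
```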
Note: Dell EMC recommends purchasing servers with enough network (NIC) ports to accommodate near-future deployment needs. This design guide does not address the deployment of multi-NIC teaming (bonding) because of issues experienced during our validation test work. We expect these issues to be fully resolved by the time that OpenShift Container Platform 4.3 is released. Therefore, we recommend provisioning your server nodes and switches to fully enable HA network deployment from the outset.
NFV-centric data centers require low latency in all aspects of container ecosystem design for application deployment. This requirement means that you must give particular attention to selecting low-latency components throughout the OpenShift cluster. Dell EMC strongly recommends using only NVMe drives, NFV-centric versions of Intel CPUs, and, at a minimum, the Dell EMC PowerSwitch S5232F-ON switch. Consult the Dell EMC Service Provider support team for specific guidance.
Dell EMC strongly recommends that all servers be equipped with redundant power supplies and that power cabling provides redundant power to the servers. Configure each rack with pairs of power distribution units (PDUs). For consistency, connect all right-most power supply units (PSUs) to a right-side PDU and all left-most PSUs to a left-side PDU. Use as many PDUs as you need, in pairs. Each PDU must have an independent connection to the data center power bus.
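The PSU-to-PDU cabling convention above can be validated with a short sketch. The data model and function name below are hypothetical, intended only to show the invariant: each server's two PSUs must land on different PDUs:

```python
# Minimal sketch (hypothetical data model): verify that every server's two
# PSUs are cabled to different PDUs, per the redundancy scheme above.
def miscabled_servers(cabling: dict) -> list:
    """Return servers whose two PSUs share a single PDU (violations)."""
    return [srv for srv, (left, right) in cabling.items() if left == right]

cabling = {
    "worker-01": ("pdu-left-1", "pdu-right-1"),
    "worker-02": ("pdu-left-1", "pdu-left-1"),   # mis-cabled on purpose
}
print(miscabled_servers(cabling))  # ['worker-02']
```

A check like this can be run against rack inventory data before power-on to catch cabling mistakes that would silently defeat the redundancy.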
The following figure shows an example of the power configuration that is designed to assure redundant power supply to each cluster device.
Figure 8. PSU to PDU power template