Validated hardware configuration options
The Dell OpenShift team used various server configurations for validation testing of the Dell Validated Design (DVD) for OpenShift Container Platform 4.14. Dell Technologies recommends selecting server configurations that are known to provide a satisfactory deployment experience and that meet or exceed Day-2 operating expectations. This chapter provides guidelines for processor selection, memory configuration, local (on-server) disk storage, and network configuration.
The 4th generation AMD EPYC™ Processor family provides high performance, advanced reliability, and hardware-enhanced security for demanding compute, network, and storage workloads. Dell Technologies recommends AMD EPYC™ 9354 Processors for PowerEdge servers. While many sites prefer to use a single-server configuration for all node types, that option is not always cost-effective or practical. When selecting a processor, take the following requirements into account:
When ordering and configuring your solution, refer to the following documentation:
Modify the memory configuration as necessary to meet your budgetary constraints and operating needs. Also consult OpenShift architectural guidance and consider your own observations from running your workloads on OpenShift Container Platform 4.14.
Disk drive performance significantly limits many aspects of OpenShift cluster deployment and operation. The drive selection for the DVD is based on a comparison of cost per GB of capacity against observed performance criteria, such as cluster deployment time and application deployment characteristics and performance. Over time, users gain insight into the capacities that best enable them to meet their requirements.
Your selection of switches for the OpenShift Container Platform cluster infrastructure must take account of the network switches, the overall balance of I/O pathways within server nodes, and the NICs for your cluster. When you choose to include high-I/O bandwidth drives as part of your platform, ensure that sufficient network I/O is available to support high-speed, low-latency drives, as follows:
The following table provides guidance for ensuring adequate I/O bandwidth and taking advantage of available disk I/O bandwidth:
| NIC selection | Compute node storage device type |
|---|---|
| 2 x 10 GbE | Spinning magnetic media (hard drive) |
| 2 x 25 GbE or 4 x 25 GbE | SATA or SAS SSD drives |
| 4 x 25 GbE or 2 x 100 GbE | NVMe SSD drives |
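The table above can be expressed as a simple lookup. The following sketch is purely illustrative (the helper name and device-type keys are assumptions, not part of any Dell tooling); it maps a compute-node storage device type to the NIC options recommended in the table.

```python
# Illustrative lookup of the NIC guidance table (hypothetical helper;
# device-type keys are assumed names, not Dell identifiers).
NIC_GUIDANCE = {
    "hdd": ["2 x 10 GbE"],
    "sata_sas_ssd": ["2 x 25 GbE", "4 x 25 GbE"],
    "nvme_ssd": ["4 x 25 GbE", "2 x 100 GbE"],
}

def recommended_nics(device_type: str) -> list[str]:
    """Return the NIC options from the guidance table for a device type."""
    return NIC_GUIDANCE[device_type.lower()]

print(recommended_nics("nvme_ssd"))  # ['4 x 25 GbE', '2 x 100 GbE']
```

Faster drives sit higher in the table precisely because their sustained throughput can exceed what a 2 x 10 GbE bond can carry.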
True network HA fail-safe design requires that each NIC be duplicated, permitting a pair of ports to be split across two physically separated switches. A pair of PowerSwitch S5248F-ON switches provides 96 x 25 GbE ports, enough for approximately 20 servers. This switch is cost-effective for a compact cluster. For a larger cluster, consider using PowerSwitch S5232F-ON switches. You can also add another pair of S5248F-ON switches to scale the cluster to a full rack.
The PowerSwitch S5232F-ON provides 32 x 100 GbE ports. When used with four-way QSFP28-to-SFP28 breakout cables, a pair of these switches provides up to 256 x 25 GbE endpoints, more than enough for a full rack of servers before more complex network topologies are required.
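The port-count arithmetic behind these sizing statements can be sketched as follows. This is a back-of-the-envelope helper, not a Dell sizing tool; the number of ports reserved for uplinks and inter-switch links varies by design and is an assumption here.

```python
# Back-of-the-envelope switch-pair sizing (illustrative only; the
# `reserved` uplink/inter-switch port count is a design-specific assumption).
def servers_per_switch_pair(ports_per_switch: int, breakout: int,
                            ports_per_server: int, reserved: int = 0) -> int:
    """Endpoints a redundant switch pair can serve, per the per-server port count."""
    endpoints = 2 * ports_per_switch * breakout - reserved
    return endpoints // ports_per_server

# Pair of S5248F-ON: 48 x 25 GbE each, no breakout, 4 x 25 GbE per server.
print(servers_per_switch_pair(48, 1, 4))               # 24
# Reserving ports for uplinks brings this near the ~20 servers cited above.
print(servers_per_switch_pair(48, 1, 4, reserved=16))  # 20

# Pair of S5232F-ON with four-way QSFP28-to-SFP28 breakouts:
print(2 * 32 * 4)  # 256 x 25 GbE endpoints
```

Splitting each server's port pairs across the two switches in the pair preserves the HA property described above while keeping the arithmetic unchanged.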
Dell Technologies strongly recommends that all servers are equipped with redundant power supplies and that power cabling provides redundant power to the servers. Configure each rack with pairs of power distribution units (PDUs). For consistency, connect all right-most power supply units (PSUs) to a right-side PDU, and all left-most PSUs to a left-side PDU. Use as many PDUs as you need, in pairs. Each PDU must have an independent connection to the data center power bus.
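The left/right cabling convention above lends itself to a mechanical check. The following sketch is hypothetical (the inventory format and naming scheme are assumptions): it verifies that every right-hand PSU lands on a right-side PDU and every left-hand PSU on a left-side PDU.

```python
# Hypothetical sanity check for the PDU cabling convention
# (inventory format and 'left'/'right' naming are assumptions).
def cabling_is_redundant(cabling: dict[str, str]) -> bool:
    """cabling maps PSU names (e.g. 'server1-left') to PDU names (e.g. 'pdu-left-a')."""
    return all(
        ("left" in psu) == ("left" in pdu)
        for psu, pdu in cabling.items()
    )

print(cabling_is_redundant({
    "server1-left": "pdu-left-a", "server1-right": "pdu-right-a",
    "server2-left": "pdu-left-a", "server2-right": "pdu-right-a",
}))  # True
```

A check like this catches the common miscabling where both PSUs of one server share a single PDU, which silently defeats the power redundancy the design calls for.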
The following figure shows an example of the power configuration that is designed to ensure a redundant power supply for each cluster device: