Introduction to hardware design
This chapter describes node design options that enable you to build a cluster for a wide range of workload-handling capabilities, expanding on information that was introduced in Chapter 2, Technology Overview. In most cases, the platform design process ensures that your cluster can meet initial workloads. The cluster must also be capable of being scaled out as the demand for workload handling grows.
Specifying and building an on-premises Kubernetes cluster without guidance is a daunting task, and building that cluster into a resilient, development-ready or production-ready container ecosystem can be more challenging still. Dell EMC Ready Stack for Red Hat OpenShift Container Platform 4.2 provides a straightforward way to deploy and commission a Kubernetes cluster. However, matching the design and capacity of your platform infrastructure to specific site needs requires additional work.
Organizations that have successfully built and deployed on-premises container ecosystems understand their current container infrastructure and can usually best determine the initial configuration needs for their next venture into container operations. With this knowledge, it is easier to approach CPU sizing, memory configuration, network bandwidth capacity specification, and storage needs.
In the absence of a clear understanding of your workload and the resources you need, the following design information might help to explain the physical hardware requirements. Calculations from measured or assumed requirements are only a guide to real-world operational requirements. Many operational factors can impact how the complexity of a container ecosystem affects operational latencies. A good practice is to add a safety margin to all physical resource estimates. Dell EMC’s goal in providing this information is to help you get Day-2 operations underway as smoothly as possible.
Kubernetes, and the platforms that integrate it, impose design limits on resource utilization. The following sections describe the limits for Kubernetes 1.14 (the version that is used in OpenShift Container Platform 4.2) and the published limits for OpenShift Container Platform 4.2. These limits set the outer boundaries for node design in your container ecosystem.
Application pods (software) that run on a cluster can be scaled up until available physical cluster resources (CPU cores, memory, network bandwidth I/O, and storage I/O) are reached. Physical cluster resources can be oversubscribed in production use. Oversubscription affects the service-level performance of all application pods that are running on a node or across a cluster.
When work began on development of OpenShift Container Platform 4.2, the available Kubernetes release was version 1.14. For that release, the Kubernetes documentation lists the following cluster limits:

- No more than 5,000 nodes
- No more than 150,000 total pods
- No more than 300,000 total containers
- No more than 100 pods per node
The design and architecture of Kubernetes places resource-hosting limits on a Kubernetes cluster. Red Hat supports OpenShift Container Platform 4.2 up to the cluster maximums that are described in Planning your environment according to object limits, including:

- No more than 2,000 nodes
- No more than 150,000 total pods
- No more than 250 pods per node
Use this information when you design your container ecosystem.
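As a starting point, a planned cluster can be checked against the published maximums before any detailed node sizing. The following sketch assumes the OpenShift Container Platform 4.2 values (2,000 nodes, 150,000 total pods, 250 pods per node); the function name and structure are illustrative:

```python
# Published cluster maximums (assumed: OpenShift Container Platform 4.2 values)
MAX_NODES = 2000
MAX_TOTAL_PODS = 150_000
MAX_PODS_PER_NODE = 250

def within_limits(nodes: int, total_pods: int) -> bool:
    """Check a planned cluster design against the published maximums."""
    return (nodes <= MAX_NODES
            and total_pods <= MAX_TOTAL_PODS
            and total_pods / nodes <= MAX_PODS_PER_NODE)

# A 40-node cluster hosting 630 pods is comfortably within all three limits.
print(within_limits(40, 630))   # True
```

Average pods per node is only a coarse check; a real design must also verify that no single node is scheduled beyond the per-node pod limit.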
This section describes how to size a Kubernetes-based container ecosystem cluster using a sample cloud-native application. The following table shows a cloud-native inventory management application with a customized quotation-generation workload. The estimated memory, CPU core, I/O bandwidth, and storage requirements are assumed to be indicative of resource requirements at peak load.
Table 6. Estimated workload resource requirements by application type
| Application type | Number of pods | Maximum memory per pod (GB) | CPU cores per pod | Typical IOPS @ block size (KB) | Persistent storage per pod (GB) |
|---|---|---|---|---|---|
| Apache web app | 150 | 0.5 | 0.5 | 10 @ 0.5 | 1 |
| Python-based app | 50 | 0.4 | 0.5 | 55 @ 0.5 | 1 |
| JavaScript run-time | 220 | 1 | 1 | 80 @ 2.0 | 1 |
| Database | 100 | 16 | 2 | 60 @ 8.0 | 15 |
| Java-based tools | 110 | 1.2 | 1 | 25 @ 1.0 | 1.5 |
The overall resource requirements are: 630 pods, 630 CPU cores, 2,047 GB RAM, 1.9 TB storage, and 130 Gbps aggregate network bandwidth.
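The aggregate pod, CPU, memory, and storage figures follow directly from multiplying each Table 6 row by its pod count. A short script makes the arithmetic explicit (the dictionary layout here is illustrative):

```python
# Per-application rows from Table 6:
# (pods, max memory GB per pod, CPU cores per pod, persistent storage GB per pod)
workloads = {
    "Apache web app":      (150, 0.5,  0.5, 1.0),
    "Python-based app":    (50,  0.4,  0.5, 1.0),
    "JavaScript run-time": (220, 1.0,  1.0, 1.0),
    "Database":            (100, 16.0, 2.0, 15.0),
    "Java-based tools":    (110, 1.2,  1.0, 1.5),
}

total_pods = sum(p for p, *_ in workloads.values())
total_cores = sum(p * c for p, _, c, _ in workloads.values())
total_mem_gb = sum(p * m for p, m, _, _ in workloads.values())
total_storage_gb = sum(p * s for p, *_, s in workloads.values())

print(total_pods, total_cores, total_mem_gb, total_storage_gb)
# 630 630.0 2047.0 2085.0
```

Note that the raw per-pod storage values sum to 2,085 GB, slightly above the rounded aggregate quoted above; treat either figure as an estimate and add a safety margin, as recommended earlier.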
The following table provides worker-node estimates that are calculated from the workload information in Table 6. Depending on the node design, the cluster requires approximately 40, 27, or 14 worker nodes to accommodate the projected workload. Field experience suggests treating such estimates with caution for production use.
Table 7. Calculated worker node alternate configurations based on Table 6 data
| Worker node type | Required node quantity | Total CPU cores | Total RAM (GB) |
|---|---|---|---|
| Intel Gold 4208 CPU, 192 GB RAM, 2 x 25 GbE NICs | 40 | 640 | 7,680 |
| Intel Gold 6226 CPU, 384 GB RAM, 4 x 25 GbE NICs | 27 | 648 | 10,368 |
| Intel Gold 6252 CPU, 768 GB RAM, 2 x 100 GbE NICs | 14 | 672 | 10,752 |
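A node-count estimate of this kind can be sketched by dividing aggregate demand by per-node capacity and taking the larger of the CPU-bound and memory-bound counts. This is a simplification that ignores OS overhead, network limits, and headroom; the per-node core counts below assume dual-socket servers (2 x Gold 4208 = 16 cores, 2 x Gold 6226 = 24 cores, 2 x Gold 6252 = 48 cores):

```python
import math

def nodes_needed(total_cores: float, total_mem_gb: float,
                 cores_per_node: int, mem_gb_per_node: int) -> int:
    """Return the smallest worker-node count that satisfies both CPU and memory demand."""
    by_cpu = math.ceil(total_cores / cores_per_node)
    by_mem = math.ceil(total_mem_gb / mem_gb_per_node)
    return max(by_cpu, by_mem)

# Aggregate demand from Table 6: 630 cores, 2,047 GB RAM.
for label, cores, mem in [("2 x Gold 4208, 192 GB", 16, 192),
                          ("2 x Gold 6226, 384 GB", 24, 384),
                          ("2 x Gold 6252, 768 GB", 48, 768)]:
    print(label, nodes_needed(630, 2047, cores, mem))
# Prints 40, 27, and 14 nodes respectively: all three designs are CPU-bound,
# which matches the required node quantities in Table 7.
```

Because all three designs are CPU-bound for this workload, the larger-memory nodes mainly buy headroom for memory-hungry pods rather than fewer nodes per core.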
Dell EMC’s minimum recommended master node configuration is a PowerEdge R640 with dual Intel Gold 6226 CPUs and 192 GB of RAM. As the Red Hat resource requirements show, this configuration is large enough for a cluster of 250 nodes or more. Because Dell EMC recommends that you do not scale a cluster beyond 250 nodes, the proposed reference design is adequate for nearly all deployments. The following table shows the sizing recommendations:
Table 8. Master node sizing guide
| Number of worker nodes | CPU cores | Memory (GB) |
|---|---|---|
| 25 | 4 | 16 |
| 100 | 8 | 32 |
| 250 | 16 | 64 |
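The sizing guide above works as a simple tier lookup: choose the smallest tier whose worker-node ceiling covers the planned cluster. A minimal sketch, with names that are illustrative rather than part of any Dell EMC or Red Hat tooling:

```python
# Master sizing tiers from Table 8: (max worker nodes, CPU cores, memory GB)
MASTER_TIERS = [(25, 4, 16), (100, 8, 32), (250, 16, 64)]

def master_size(worker_nodes: int) -> tuple[int, int]:
    """Return (CPU cores, memory GB) per master for a given worker-node count."""
    for max_workers, cores, mem_gb in MASTER_TIERS:
        if worker_nodes <= max_workers:
            return cores, mem_gb
    raise ValueError("cluster exceeds the 250-worker design recommendation")

# The 40-node design from Table 7 falls into the 100-worker tier.
print(master_size(40))   # (8, 32)
```

For the 40-, 27-, and 14-node designs in Table 7, the recommended dual Gold 6226 / 192 GB master configuration comfortably exceeds every tier in this table.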