This section describes how to size an OpenShift-based container ecosystem cluster using a cloud-native application as an example. The following table shows the workload of a cloud-native inventory management application with a customized quotation generation system. The estimated memory, CPU core, I/O bandwidth, and storage figures represent resource requirements at peak load.
Table 4. Estimated workload resource requirements by application type
| Application type | Number of pods | Maximum memory (GB) | CPU cores | Typical IOPS @ block size (KB) | Persistent storage (GB) |
|---|---|---|---|---|---|
| Apache web application | 150 | 0.5 | 0.5 | 10 @ 0.5 | 1 |
| Python-based application | 50 | 0.4 | 0.5 | 55 @ 0.5 | 1 |
| JavaScript runtime | 220 | 1 | 1 | 80 @ 2.0 | 1 |
| Database | 100 | 16 | 2 | 60 @ 8.0 | 15 |
| Java-based tools | 110 | 1.2 | 1 | 25 @ 1.0 | 1.5 |
The overall resource requirements are: 630 pods, 630 CPU cores, 2,047 GB RAM, 1.9 TB storage, and 130 Gbps aggregate network bandwidth.
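As a rough cross-check, the per-pod figures in Table 4 can be multiplied by the pod counts to reproduce these aggregates. The following Python sketch is illustrative only; the tuples simply transcribe the table, and the variable names are not part of any tool.

```python
# Per-application figures from Table 4:
# (pods, max memory GB per pod, CPU cores per pod, persistent storage GB per pod)
workloads = {
    "Apache web application":   (150, 0.5, 0.5, 1.0),
    "Python-based application": (50,  0.4, 0.5, 1.0),
    "JavaScript runtime":       (220, 1.0, 1.0, 1.0),
    "Database":                 (100, 16.0, 2.0, 15.0),
    "Java-based tools":         (110, 1.2, 1.0, 1.5),
}

total_pods    = sum(pods for pods, *_ in workloads.values())
total_cores   = sum(pods * cores for pods, _, cores, _ in workloads.values())
total_memory  = sum(pods * mem for pods, mem, _, _ in workloads.values())
total_storage = sum(pods * pv for pods, _, _, pv in workloads.values())

print(f"Pods: {total_pods}")          # 630
print(f"CPU cores: {total_cores:g}")  # 630
print(f"RAM (GB): {total_memory:g}")  # 2047
print(f"Persistent storage (GB): {total_storage:g}")
```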
Our calculations using the workload information from Table 4 account for the cluster design considerations that apply when estimating the required number of worker nodes. This section outlines these considerations.
A total of 630 pods must be deployed, which is well above the OpenShift limit of 250 pods per worker node. Based on that limit alone, the minimum number of worker nodes is 630 / 250 = 2.52, which rounds up to 3 nodes.
Table 5 provides estimates for the number of worker nodes needed to accommodate the projected workload in Table 4. The cluster might require 40, 27, or 14 worker nodes, depending on the node design. Field experience suggests treating these estimates with caution for production use.
The following table shows the available configurations:
Table 5. Calculated worker node alternate configurations based on Table 4 data
| Worker node type (PowerEdge R640) | Required node quantity | Total CPU cores | Total RAM (GB) |
|---|---|---|---|
| Intel Gold 4208 CPU, 192 GB RAM, 2 x 25 GbE NICs | 40 | 640 | 7,680 |
| Intel Gold 6226 CPU, 384 GB RAM, 4 x 25 GbE NICs | 27 | 648 | 10,368 |
| Intel Gold 6252 CPU, 768 GB RAM, 2 x 100 GbE NICs | 14 | 672 | 10,752 |
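The node quantities in Table 5 follow from dividing the aggregate requirements by each configuration's per-node capacity and taking the most constraining result. The following Python sketch reproduces that arithmetic; the per-node core counts (dual-socket 8-, 12-, and 24-core CPUs) are assumptions based on typical dual-socket R640 configurations and are not stated in the table.

```python
import math

# Cluster-wide requirements derived from Table 4.
REQUIRED_PODS = 630
REQUIRED_CORES = 630
REQUIRED_RAM_GB = 2047
PODS_PER_NODE_LIMIT = 250  # OpenShift pod limit per worker node

# Per-node capacities for the PowerEdge R640 configurations in Table 5.
# Core counts assume dual-socket servers (2 x 8-core 4208, 2 x 12-core 6226,
# 2 x 24-core 6252); these counts are an assumption, not taken from the table.
node_types = {
    "Intel Gold 4208, 192 GB": {"cores": 16, "ram_gb": 192},
    "Intel Gold 6226, 384 GB": {"cores": 24, "ram_gb": 384},
    "Intel Gold 6252, 768 GB": {"cores": 48, "ram_gb": 768},
}

for name, spec in node_types.items():
    # A configuration must satisfy the CPU, memory, and pod-count constraints,
    # so the required quantity is the largest of the three rounded-up ratios.
    nodes = max(
        math.ceil(REQUIRED_CORES / spec["cores"]),
        math.ceil(REQUIRED_RAM_GB / spec["ram_gb"]),
        math.ceil(REQUIRED_PODS / PODS_PER_NODE_LIMIT),
    )
    print(f"{name}: {nodes} worker nodes")  # 40, 27, and 14 respectively
```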
Our minimum recommended master node configuration is a PowerEdge R640 server with dual Intel Gold 6226 CPUs and 192 GB of RAM. As the Red Hat resource requirements show, this configuration is large enough for a cluster of 250 worker nodes or more. Because Dell Technologies recommends that you do not scale beyond 200 nodes, the proposed reference design is adequate for nearly all deployments. The following table shows the sizing recommendations:
Table 6. Master node sizing guide
| Number of worker nodes | CPU cores* | Memory (GB) |
|---|---|---|
| 25 | 4 | 16 |
| 100 | 8 | 32 |
| 200 | 16 | 64 |
*Does not include provisioning of at least four cores per node for infrastructure I/O handling
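For capacity-planning scripts, the sizing guide can be treated as a simple lookup. The following sketch is a minimal illustration, assuming the worker-node counts in Table 6 act as upper bounds for each tier; the function name and tier-selection rule are hypothetical, not a Red Hat formula.

```python
# Master node sizing tiers transcribed from Table 6:
# (max worker nodes, CPU cores, memory GB)
MASTER_SIZING_TIERS = [
    (25, 4, 16),
    (100, 8, 32),
    (200, 16, 64),
]

def master_node_size(worker_nodes: int) -> tuple[int, int]:
    """Return (cpu_cores, memory_gb) for the smallest tier that covers the cluster."""
    for max_workers, cores, mem_gb in MASTER_SIZING_TIERS:
        if worker_nodes <= max_workers:
            return cores, mem_gb
    raise ValueError("Cluster exceeds the 200-worker-node sizing guidance")

print(master_node_size(40))  # (8, 32): the 100-worker tier covers a 40-node cluster
```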