A cloud-native infrastructure must accommodate a large, scalable mix of service-oriented applications and their dependent components. These applications and components are generally microservice-based. The key to sustaining their operation is the right platform infrastructure together with a sustainable management and control plane. This reference design helps you specify the infrastructure requirements for building an on-premises OpenShift Container Platform 4.6 solution.
The following figure shows the solution design:
Figure 3. OpenShift Container Platform 4.6 cluster design
This Ready Stack design defines four host types that make up an OpenShift Container Platform cluster: the bootstrap node, control-plane nodes, compute nodes, and storage nodes.
The deployment process also requires a node called the Cluster System Admin Host (CSAH). A description of the process is available in the Ready Stack for Red Hat OpenShift Container Platform 4.6 Deployment Guide at the Dell Technologies Solutions Info Hub for Containers.
Note: Red Hat official documentation does not refer to a CSAH node in the deployment process.
The CSAH node is not part of the cluster, but it is required for OpenShift cluster administration. Dell Technologies strongly discourages logging in to a control-plane node to manage the cluster. The OpenShift CLI administration tools are deployed onto the CSAH node, and the authentication tokens that are required to administer the OpenShift cluster are installed only on that node as part of the deployment process.
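For example, day-to-day administration from the CSAH node typically amounts to pointing the oc client at the kubeconfig credentials that the installer generates. The path shown here is an assumption; it depends on where the installation directory was created during deployment:

    export KUBECONFIG=~/openshift/auth/kubeconfig   # assumed installer output path
    oc whoami       # confirm the authenticated identity
    oc get nodes    # verify that all cluster nodes report Ready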
Note: Control-plane nodes are deployed using immutable infrastructure, further driving the preference for an administration host that is external to the cluster.
The CSAH node manages the installation and operation of the container ecosystem cluster. Installation begins with the creation of a bootstrap VM on the CSAH node, which installs the control-plane components on the controller nodes. Delete the bootstrap VM after the control plane has been deployed. Dell Technologies recommends provisioning a dedicated host for administration of the OpenShift Container Platform cluster. The initial minimum cluster consists of either three nodes that run both the control plane and application workloads, or three control-plane nodes plus at least two compute nodes. In both scenarios, OpenShift Container Platform requires exactly three control-plane nodes.
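As a sketch of that sequence, the installer can confirm that the bootstrap phase has finished before the bootstrap VM is removed; the --dir value is a placeholder for the installation directory that was used during deployment:

    openshift-install wait-for bootstrap-complete --dir=<installation_directory> --log-level=info
    # When the command reports that it is safe to remove the bootstrap resources,
    # delete the bootstrap VM on the CSAH node.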
Node components are installed and run on every node within the cluster, that is, on both controller nodes and compute nodes. These components are responsible for all node runtime operations. The key components are the Kubelet service agent, which registers the node with the cluster and manages the containers scheduled to it, and the CRI-O container runtime, which pulls images and runs the containers themselves.
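To illustrate, the state of these node services can be inspected from the CSAH node with oc debug, which opens a debug pod on the target node; <node-name> is a placeholder:

    oc debug node/<node-name> -- chroot /host systemctl status kubelet
    oc debug node/<node-name> -- chroot /host systemctl status crio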
Nodes that implement control plane infrastructure management are called controller nodes. Three controller nodes establish the control plane for the operation of an OpenShift cluster. The control plane operates outside the application container workloads and is responsible for ensuring the overall continued viability, health, availability, and integrity of the container ecosystem. Removing controller nodes is not allowed. OpenShift Container Platform also deploys additional control-plane infrastructure to manage OpenShift-specific cluster components.
The control plane provides the following functions:
- The Kubernetes API server, which validates and configures the resources of the cluster
- etcd, the cluster's persistent key-value store for configuration and state
- The scheduler, which assigns pods to suitable nodes
- Controller managers, which continuously reconcile the actual state of the cluster with the desired state
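A quick way to observe these control-plane functions on a running cluster is to list the cluster Operators that manage them, for example:

    oc get clusteroperators
    # Typical entries include kube-apiserver, etcd, kube-scheduler, and
    # kube-controller-manager, each reporting Available/Progressing/Degraded status.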
In an OpenShift cluster, application containers are deployed to run on compute nodes by default. The "compute node" designation is a role rather than a hardware distinction; no special configuration is required, and applications can also run on control-plane nodes if those nodes are made schedulable. Cluster nodes advertise their resources and resource utilization so that the scheduler can allocate containers and pods to the nodes while maintaining a reasonable workload distribution. The Kubelet service runs on each compute node. It receives container deployment requests and ensures that they are instantiated and put into operation. The Kubelet service also starts and stops container workloads and manages a service proxy that handles communication between pods running across compute nodes.
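If application workloads must run on the control-plane nodes, as in the three-node topology described earlier, the cluster-wide Scheduler resource can be modified to make those nodes schedulable. A minimal sketch of the relevant setting follows; edit the existing resource in place with oc edit schedulers.config.openshift.io cluster rather than creating a new one:

    apiVersion: config.openshift.io/v1
    kind: Scheduler
    metadata:
      name: cluster
    spec:
      mastersSchedulable: true   # allow application pods on control-plane nodes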
Logical constructs called MachineSets define compute node resources. A MachineSet can be used to match the requirements of a pod deployment to a suitable compute node type. OpenShift Container Platform supports multiple MachineSets, each defining a target compute node type.
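The following is a minimal MachineSet sketch showing how a compute node type is expressed where the Machine API is available for the underlying platform. The cluster ID and role labels are placeholders, and the providerSpec section, which is specific to the infrastructure provider, is omitted:

    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    metadata:
      name: <cluster-id>-worker          # placeholder name
      namespace: openshift-machine-api
    spec:
      replicas: 2
      selector:
        matchLabels:
          machine.openshift.io/cluster-api-machineset: <cluster-id>-worker
      template:
        metadata:
          labels:
            machine.openshift.io/cluster-api-machineset: <cluster-id>-worker
        spec:
          metadata:
            labels:
              node-role.kubernetes.io/worker: ""   # role applied to new nodes
          # providerSpec omitted; it varies by platform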
Compute nodes can be added to or deleted from a cluster if doing so does not compromise the viability of the cluster. If the control plane nodes are not designated as schedulable, at least two viable compute nodes must always be operating. Further, enough compute platform resources must be available to sustain the overall cluster application container workload.
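Where MachineSets are in use, adding or removing compute nodes is a matter of scaling the MachineSet; the name below is a placeholder:

    oc scale machineset <cluster-id>-worker -n openshift-machine-api --replicas=3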
Storage can be provisioned either from dedicated storage nodes or from nodes that are shared with compute services. In either case, provisioning occurs on disk drives that are locally attached to servers that have been added to the cluster as compute nodes.
OpenShift Container Storage (OCS), which is deployed after the cluster installation, simplifies and automates the provisioning of storage for cloud-native container use. To integrate OCS, which is based on Ceph, into the container ecosystem infrastructure, administrators must provision appropriate storage nodes. Existing compute nodes can also be used if they meet the OCS hardware requirements.
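Nodes intended for OCS are identified with the storage label that the Operator's node selector expects; <node-name> is a placeholder:

    oc label node <node-name> cluster.ocs.openshift.io/openshift-storage=''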
You can initiate the deployment of OCS from the embedded OperatorHub while logged in to OpenShift Container Platform as a cluster administrator. For more information, see the OpenShift Container Platform 4.6 Documentation.
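Equivalently, the OCS Operator can be subscribed to declaratively. The following is a minimal sketch, assuming the openshift-storage namespace already exists and that stable-4.6 is the appropriate channel for this release:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: ocs-operator
      namespace: openshift-storage
    spec:
      channel: stable-4.6            # assumed channel for OCS 4.6
      name: ocs-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace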