The following OpenShift cluster topologies provide different levels of server footprint and high availability (HA):
The following figure shows the design for a multinode OpenShift cluster on Dell infrastructure:
The deployment process requires an admin node called Cluster System Admin Host (CSAH). The CSAH node can also act as the provisioner node for deploying OpenShift Container Platform on installer-provisioned infrastructure (IPI). For more information, see Installation Overview.
Note: Red Hat official documentation does not refer to a CSAH node in the deployment process.
CSAH nodes are not part of the OpenShift cluster, but are recommended for OpenShift cluster administration and operation. CSAH nodes can also provision additional infrastructure services if required, such as Dynamic Host Configuration Protocol (DHCP), Preboot Execution Environment (PXE), Domain Name System (DNS), and HAProxy services for cluster deployment and operation.
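If the CSAH node also hosts DNS for the cluster, a small pre-flight check can confirm that the records OpenShift requires (API, internal API, and the wildcard ingress domain) resolve before deployment begins. The following Python sketch assumes a cluster name of ocp and a base domain of example.com; both are placeholders, not values from this paper.

```python
#!/usr/bin/env python3
"""Minimal pre-flight DNS check, run from the CSAH node, that the records
required by OpenShift resolve. Cluster name and base domain are examples."""
import socket

CLUSTER = "ocp"                # placeholder cluster name
BASE_DOMAIN = "example.com"    # placeholder base domain

# Records the cluster expects: API, internal API, and a name under *.apps.
records = [
    f"api.{CLUSTER}.{BASE_DOMAIN}",
    f"api-int.{CLUSTER}.{BASE_DOMAIN}",
    f"console-openshift-console.apps.{CLUSTER}.{BASE_DOMAIN}",  # exercises the wildcard
]

for name in records:
    try:
        print(f"{name} -> {socket.gethostbyname(name)}")
    except socket.gaierror:
        print(f"{name} -> NOT RESOLVABLE")
```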
You can use a single CSAH node for development and testing purposes, but this approach does not provide resilient load balancing. For resilient load balancing, Dell Technologies recommends:
Note: Control-plane nodes are deployed using immutable infrastructure, further driving the preference for an administration host that is external to the cluster.
The CSAH nodes manage the installation and operation of the container ecosystem cluster. Cluster installation begins with creating a bootstrap VM on the primary CSAH node, which installs the control-plane components on the control-plane nodes. The initial minimum cluster can consist of three nodes that run both the control plane and applications, or three control-plane nodes and at least two compute nodes.
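After installation, you can confirm which of these topologies was deployed by counting node roles. The following sketch is one way to do that from the CSAH node; it assumes the Python kubernetes client and the kubeconfig that the installer generates, neither of which is mandated by this paper.

```python
"""Sketch: verify the deployed topology by counting node roles."""
from kubernetes import client, config

config.load_kube_config()          # uses KUBECONFIG or ~/.kube/config
nodes = client.CoreV1Api().list_node().items

def has_role(node, role):
    labels = node.metadata.labels or {}
    return f"node-role.kubernetes.io/{role}" in labels

cp_names = {n.metadata.name for n in nodes
            if has_role(n, "master") or has_role(n, "control-plane")}
worker_names = {n.metadata.name for n in nodes if has_role(n, "worker")}

print(f"control-plane nodes: {len(cp_names)}, compute nodes: {len(worker_names)}")
if cp_names and cp_names == worker_names:
    print("compact cluster: control-plane nodes also run application workloads")
```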
Node components are installed and run on every node in the cluster. The following components are responsible for all node run-time operations:
For more information, see Overview of nodes.
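As a basic check of node run-time health, you can read the conditions that each node reports. The following Python sketch assumes the kubernetes client and a working kubeconfig; it is illustrative only and is not part of the Red Hat tooling.

```python
"""Sketch: read the conditions each node reports as a basic health check."""
from kubernetes import client, config

config.load_kube_config()
for node in client.CoreV1Api().list_node().items:
    conditions = {c.type: c.status for c in node.status.conditions}
    ready = conditions.get("Ready") == "True"
    pressure = [t for t in ("MemoryPressure", "DiskPressure", "PIDPressure")
                if conditions.get(t) == "True"]
    print(f"{node.metadata.name}: Ready={ready}, pressure={pressure or 'none'}")
```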
Control-plane nodes implement control-plane infrastructure management. Three control-plane nodes establish a unified control plane for the operation of an OpenShift cluster. The control plane operates outside the application container workloads and ensures the continued viability, health, availability, and integrity of the container ecosystem.
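One way to observe control-plane health is to query the ClusterOperator resources that report the status of these services. The following sketch assumes the Python kubernetes client and cluster-admin access; it is an example, not a supported monitoring method.

```python
"""Sketch: report the availability of the cluster operators that deliver the
Kubernetes and OpenShift control-plane services."""
from kubernetes import client, config

config.load_kube_config()
operators = client.CustomObjectsApi().list_cluster_custom_object(
    group="config.openshift.io", version="v1", plural="clusteroperators")

for op in operators["items"]:
    conditions = {c["type"]: c["status"] for c in op["status"].get("conditions", [])}
    print(f"{op['metadata']['name']}: "
          f"Available={conditions.get('Available')}, "
          f"Degraded={conditions.get('Degraded')}")
```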
The control plane provides Kubernetes services and OpenShift services:
Although OpenShift Container Platform is resilient to node failure, regularly backing up the etcd data store is recommended. These backups are a blocking procedure, so taking them at off-peak hours is recommended in production environments. For example, if you update a cluster within a minor version (such as from 4.12.2 to 4.12.3), take an etcd backup of the version of OpenShift Container Platform that is running on your cluster. Also take an etcd backup 24 hours after the cluster has been installed, so that the initial rotation of certificates has completed and the backup does not contain expired certificates. For more information, see Backing up etcd.
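The supported backup procedure runs the cluster-backup.sh script on a control-plane node, as described in Backing up etcd. The following sketch shows how that documented command might be wrapped in Python from the CSAH node; the node name and backup directory are placeholders.

```python
"""Sketch: trigger the documented etcd backup script on one control-plane
node. Node name and backup path are examples; see 'Backing up etcd' for the
supported procedure."""
import subprocess

NODE = "control-plane-0.ocp.example.com"   # placeholder control-plane node name

subprocess.run(
    ["oc", "debug", f"node/{NODE}", "--",
     "chroot", "/host",
     "/usr/local/bin/cluster-backup.sh", "/home/core/assets/backup"],
    check=True,
)
```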
Because of etcd quorum requirements, if a majority of control-plane nodes fail, etcd loses quorum and the control plane stops operating; restoring from a previous cluster state is then the only option for cluster recovery. Follow the steps in Restoring to a previous cluster state. If a majority of control-plane nodes are still operating, quorum is maintained but there is no tolerance for further node failure; replace the unhealthy etcd member by following the steps in Replacing an unhealthy etcd member.
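The arithmetic behind these recovery paths is simple: a cluster of n etcd members needs a majority (quorum) of floor(n/2) + 1 members to keep operating. The following short sketch illustrates the resulting fault tolerance.

```python
"""Sketch: etcd quorum arithmetic. With three members, the cluster tolerates
one failure; losing two members loses quorum and forces a restore."""
def quorum(members: int) -> int:
    return members // 2 + 1

for members in (3, 5):
    print(f"{members} members: quorum={quorum(members)}, "
          f"tolerated failures={members - quorum(members)}")
```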
In an OpenShift cluster, application containers are deployed to run on compute nodes by default. Cluster nodes advertise their resources and resource use so that the scheduler can allocate containers and pods to these nodes and maintain a reasonable workload distribution. The kubelet service runs on every node in a Kubernetes cluster; it receives container deployment requests, ensures that they are instantiated and put into operation, and starts and stops container workloads. A service proxy also runs on each node to handle communication between pods that are running across compute nodes.
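The resources that each node advertises to the scheduler are visible in the node's allocatable status. The following sketch lists them; it assumes the Python kubernetes client and a valid kubeconfig and is illustrative only.

```python
"""Sketch: list the CPU, memory, and pod capacity that each node advertises
to the scheduler."""
from kubernetes import client, config

config.load_kube_config()
for node in client.CoreV1Api().list_node().items:
    alloc = node.status.allocatable
    print(f"{node.metadata.name}: cpu={alloc['cpu']}, "
          f"memory={alloc['memory']}, pods={alloc['pods']}")
```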
Logical constructs called machine sets define compute node resources. Machine sets make it possible to match the requirements of a pod deployment to a suitable compute node. OpenShift Container Platform supports defining multiple machine sets, each of which defines a compute node target type.
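Machine sets are exposed as custom resources in the machine.openshift.io/v1beta1 API group, in the openshift-machine-api namespace. The following sketch lists the defined machine sets and their replica counts; it assumes the Python kubernetes client and is an illustration rather than a prescribed workflow.

```python
"""Sketch: list the machine sets that define compute node pools."""
from kubernetes import client, config

config.load_kube_config()
machinesets = client.CustomObjectsApi().list_namespaced_custom_object(
    group="machine.openshift.io",
    version="v1beta1",
    namespace="openshift-machine-api",
    plural="machinesets",
)

for ms in machinesets["items"]:
    spec = ms.get("spec", {})
    status = ms.get("status", {})
    print(f"{ms['metadata']['name']}: desired={spec.get('replicas', 0)}, "
          f"ready={status.get('readyReplicas', 0)}")
```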
You can add or delete compute nodes only if doing so does not compromise cluster viability. If the control-plane nodes are not designated as schedulable, at least two viable compute nodes must always be operating to run router pods that manage ingress networking traffic. Further, enough compute platform resources must be available to sustain the overall cluster application container workload.
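A quick way to verify these conditions is to count the Ready compute nodes and locate the router pods. The following sketch assumes the default IngressController, whose router pods run in the openshift-ingress namespace; customized deployments may differ.

```python
"""Sketch: confirm at least two compute nodes are Ready and locate the
router pods that carry ingress traffic."""
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

ready_workers = [
    n.metadata.name for n in v1.list_node().items
    if "node-role.kubernetes.io/worker" in (n.metadata.labels or {})
    and any(c.type == "Ready" and c.status == "True" for c in n.status.conditions)
]
print(f"Ready compute nodes: {len(ready_workers)}")

# Router pods for the default IngressController (name prefix is an assumption).
for pod in v1.list_namespaced_pod("openshift-ingress").items:
    if pod.metadata.name.startswith("router-"):
        print(f"{pod.metadata.name} on {pod.spec.node_name}: {pod.status.phase}")
```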
For more information, see the following Red Hat OpenShift documentation: