In an OpenShift cluster, application containers are deployed to compute nodes by default. (The "compute node" role is a scheduling convention rather than a hardware distinction; no special hardware is required, and applications can also run on control-plane nodes if those nodes are made schedulable.) Cluster nodes advertise their resources and resource utilization so that the scheduler can allocate containers and pods to nodes and maintain a reasonable workload distribution. The kubelet service runs on every node in a Kubernetes cluster. It receives container deployment requests and ensures that they are instantiated and put into operation, and it starts and stops container workloads on its node. A service proxy also runs on each node to handle communication between pods that are running across compute nodes.
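The scheduler matches a pod's declared resource requests against the capacity each node advertises. A minimal pod manifest illustrating this follows; the pod name and image are hypothetical, and only the `resources.requests` fields matter for placement:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app                 # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # hypothetical image
    resources:
      requests:                     # the scheduler matches these against node capacity
        cpu: "500m"                 # half a CPU core
        memory: 256Mi
```

A pod with no requests can be scheduled onto any node with available capacity; setting requests lets the scheduler spread workloads according to the resources the nodes advertise.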
Logical constructs called MachineSets define compute node resources. A MachineSet can be used to match the requirements of a pod deployment to a suitable compute node. OpenShift Container Platform supports defining multiple machine types, each of which defines a compute node target type.
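The outline of a MachineSet resource can be sketched as follows. This is a simplified fragment, not a complete manifest: the names and labels are hypothetical, and a real MachineSet also carries provider-specific machine configuration under `template.spec.providerSpec`:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: mycluster-worker-a          # hypothetical machine set name
  namespace: openshift-machine-api
spec:
  replicas: 3                       # number of compute nodes of this type
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: mycluster-worker-a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: mycluster-worker-a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/worker: ""   # machines join the cluster as compute nodes
```

Defining several MachineSets with different machine types (for example, different CPU, memory, or GPU configurations) lets workloads be steered to matching compute nodes through node labels and selectors.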
Compute nodes can be added to or deleted from a cluster if doing so does not compromise the viability of the cluster. If the control-plane nodes are not designated as schedulable, at least two viable compute nodes must always be running to host the router pods that manage ingress networking traffic. Further, enough compute resources must remain available to sustain the overall cluster application container workload.
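In practice, adding or removing compute nodes is typically done by adjusting a MachineSet's replica count; the machine API then provisions or deletes machines to match. A sketch using the `oc` CLI, with a hypothetical machine set name:

```shell
# List the machine sets in the cluster to find the one to scale:
oc get machinesets -n openshift-machine-api

# Grow the compute plane to 4 nodes by raising the replica count;
# the machine API provisions a new machine and joins it as a compute node:
oc scale machineset mycluster-worker-a --replicas=4 -n openshift-machine-api
```

Scaling down works the same way with a lower replica count, subject to the viability constraints above (at least two compute nodes when the control plane is not schedulable).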
VMs can be used as control-plane nodes in an OpenShift cluster. VMs can offer greater flexibility and efficiency and a faster initial setup time. The VMware vSphere hypervisor can be used to create and manage these VMs, and system administrators can rely on VMware features for high availability (HA) and fault tolerance.