OpenShift Container Platform 4.2 can host the development and run-time execution of containerized applications, sometimes called “container workloads.” The platform uses the Kubernetes container orchestration toolchain, which is core to automating modern container deployment, scaling, and management. OpenShift Container Platform 4.2 is designed to meet the exacting demands of scale-out workloads. We expect the software platform to continue to mature and expand rapidly, ensuring continued access to the tools you need to grow your business.
OpenShift Container Platform 4.2 is supported on Red Hat Enterprise Linux 7.6 as well as Red Hat Enterprise Linux CoreOS (RHCOS) 4.2. The OpenShift Container Platform 4.2 control plane can be deployed only on RHCOS. The control plane is hosted on master nodes. Either RHEL 7.6 or RHCOS can be deployed on compute nodes, known as worker nodes. Red Hat Enterprise Linux version 8 is not yet supported in OpenShift Container Platform.
Differences between OpenShift Container Platform 3.11 and OpenShift Container Platform 4.2 include:
This section further describes the new features and enhancements in OpenShift Container Platform 4.2.
OpenShift Container Platform 4.x introduced the Operator Framework, which replaces much of the functionality that was previously provided by Helm and Helm charts. An operator is a method by which Kubernetes-native applications are packaged and deployed into the Kubernetes run-time environment. Operators provide a key mechanism for automating repetitive Kubernetes operational tasks.
The functions that the Operator Lifecycle Manager (OLM) supports include:
For more information, see Understanding the Operator Lifecycle Manager in the Red Hat OpenShift documentation.
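As a concrete illustration of the workflow that OLM manages, the following sketch creates a Subscription, the custom resource that asks OLM to install an operator from a catalog and keep it updated. The package name and channel shown here are assumptions for illustration only:

```shell
# Hedged example: ask OLM to install and maintain an operator by
# creating a Subscription. Package name and channel are illustrative.
cat <<'EOF' | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: openshift-operators
spec:
  channel: stable                 # update channel to track
  name: example-operator          # package name in the catalog
  source: redhat-operators        # catalog source providing the package
  sourceNamespace: openshift-marketplace
EOF
```

After the Subscription is created, OLM resolves the package from the named catalog, installs the operator, and applies updates as they are published to the subscribed channel.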
Previously, we deployed OpenShift Container Platform 3.11 using the openshift-ansible tool. OpenShift Container Platform 4.2 uses ignition-based deployment, a new approach to getting your Kubernetes cluster operational quickly and simply. The ignition-based deployment tool is called openshift-install.
The ignition-based installation method supports two modes of deployment: installer-provisioned infrastructure and user-provisioned infrastructure.
For bare-metal deployment, which does not make use of a hypervisor, the Dell EMC Ready Stack deployment process uses the user-provisioned infrastructure (UPI) method. The openshift-install tool requires very few install-time configuration settings. A post-installation Custom Resource Definition (CRD) facility is used to specify runtime configuration settings.
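As an example of this post-installation configuration model, a runtime setting can be applied by patching a custom resource after the cluster is up. The resource and field below, the image registry operator's management state, are a commonly documented example; verify the exact names against your release:

```shell
# Hedged sketch: adjust a runtime setting through a custom resource
# after installation, rather than at install time.
oc patch configs.imageregistry.operator.openshift.io cluster \
    --type merge \
    --patch '{"spec":{"managementState":"Managed"}}'
```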
Over-the-air upgrades for asynchronous z-stream releases of OpenShift Container Platform 4.x are available. Cluster administrators can perform an upgrade by using the Cluster Settings tab in the web console. Updates are mirrored to the local container registry before being pushed to the cluster.
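The same upgrade can also be driven from the CLI; a minimal sketch, assuming a configured `oc` client with cluster-admin credentials:

```shell
# Show the current cluster version and any available updates
oc adm upgrade

# Begin an upgrade to the latest available release in the channel
oc adm upgrade --to-latest=true
```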
Currently, no facility exists for performing an in-place upgrade of an OpenShift 3.11 cluster to OpenShift 4.2. You must redeploy the cluster to use OpenShift 4.2. After deployment, OpenShift 4.2 is capable of automatic updating, and it will likely be possible to enable automatic upgrading to later releases. Red Hat is developing tooling to enable migration of OpenShift 3.7 and later clusters to OpenShift 4.2. For more information, see this Red Hat documentation.
OperatorHub helps administrators discover and install optional components and applications. It supports add-on tools and utilities from Red Hat, Red Hat partners, and the open source community.
OpenShift Container Platform 4.2 provides support for CSI 1.0, the container storage operator, the Manila provisioner operator, and the snapshot operator.
Red Hat has added many other capabilities to the OpenShift Container Platform to make your container development process easier and more agile and to simplify deployment and management operations in production. For more information, see Understanding persistent storage in the OpenShift documentation.
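For example, an application consumes persistent storage by submitting a PersistentVolumeClaim; the storage class named below is hypothetical and depends on the provisioner deployed in your cluster:

```shell
# Hedged example: claim 10 GiB of ReadWriteOnce storage.
# The storageClassName is an assumption; use one defined in your cluster.
cat <<'EOF' | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-driver-sc
EOF
```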
OpenShift Container Platform 4.2 introduces the three basic host types that make up every cluster: the bootstrap node, master nodes, and worker nodes.
The deployment process also requires a node called the Cluster System Admin Host (CSAH), but it is not mentioned in Red Hat online documentation. The CSAH node is not part of the cluster but is required for OpenShift cluster administration. While you could log in to a master node to manage the cluster, this practice is not recommended. The OpenShift CLI administration tools are deployed onto the master nodes; however, the authentication tokens that are needed to administer the OpenShift cluster are installed only on the CSAH node as part of the deployment process.
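In practice, administering the cluster from the CSAH node amounts to pointing the CLI at the installed authentication artifacts; a sketch, assuming the installer assets were generated under an `ocp4` directory (the path is illustrative):

```shell
# Use the kubeconfig created during deployment (path is an assumption)
export KUBECONFIG=~/ocp4/auth/kubeconfig

# Confirm that the CLI reaches the cluster with admin credentials
oc whoami
oc get nodes
```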
Dell EMC recommends provisioning a dedicated host for administration of the OpenShift cluster. After the cluster is installed and started, the bootstrap node is repurposed as a worker node.
When your CSAH node is operational, installation of the cluster begins with the creation of a bootstrap node. This node is needed only during the bring-up phase of OpenShift cluster installation. When the initial minimum cluster—the master nodes and at least two worker nodes—is operational, you can redeploy the bootstrap node as a worker node. The bootstrap node is necessary to create the persistent control plane that is managed by the master nodes.
Three master nodes are required to control the operation of a Kubernetes cluster. In OpenShift Container Platform, the master nodes are responsible for all control plane operations. The control plane operates outside the application container workloads and is responsible for ensuring the overall continued viability, health, and integrity of the container ecosystem. Any nodes that implement control plane infrastructure management are called master nodes.
Master nodes operate outside the MachineType framework. They consist of machines that provide an API for overall resource management. Master nodes cannot be removed from a cluster. The master nodes provide HAProxy services and run etcd, the API server, and the Controller Manager Server.
In an OpenShift Kubernetes-based cluster, all application containers are deployed to run on worker nodes. Worker nodes advertise their resources and resource utilization so that the scheduler can allocate containers and pods to worker nodes and maintain a reasonable workload distribution. The Kubelet service, which uses the CRI-O container runtime, runs on each worker node. This service receives container deployment requests and ensures that they are instantiated and put into operation. The Kubelet service also starts and stops container workloads. In addition, a service proxy on each worker node handles communication between pods that are running across worker nodes.
Logical constructs called MachineSets define worker node resources. MachineSets can be used to match requirements for a pod to direct deployment to a matching worker node. OpenShift Container Platform supports defining multiple machine types, each of which defines a worker node target type. A future release of OpenShift Container Platform will support specifically classified worker node types, such as AI hosts, infrastructure hosts, NFV hosts, and more.
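A hedged sketch of how a MachineSet labels its worker nodes so that pods can target them with a nodeSelector; all names and labels here are hypothetical, provider-specific sections are omitted, and the fragment is written to a file for review rather than applied directly:

```shell
# Sketch of an (incomplete) MachineSet that labels its worker nodes.
# All names and labels are hypothetical; provider sections are omitted.
cat <<'EOF' > machineset-sketch.yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: worker-general
  namespace: openshift-machine-api
spec:
  replicas: 3
  template:
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/worker: ""
          workload-class: general   # hypothetical scheduling label
EOF
# Review the file, then create the MachineSet with:
#   oc apply -f machineset-sketch.yaml
```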
Worker nodes can be added to or deleted from a cluster as long as the viability of the cluster is not compromised. A minimum of two viable worker nodes must be operating at all times. Further, sufficient compute platform resources must be available to sustain the overall cluster application container workload.
Dell EMC has simplified the process of bootstrapping your first OpenShift Container Platform 4.2 cluster. To use the simplified process, ensure that your rack has been provisioned with suitable network switches and servers, that network cabling has been completed, and that Internet connectivity has been provided to the rack. Internet connectivity is necessary for the installation of OpenShift Container Platform 4.2.
The deployment procedure begins with initial switch provisioning. This step enables preparation and installation of the CSAH node, which includes:
Dell EMC has generated Ansible playbooks that fully prepare the CSAH node. Before installation of the OpenShift Container Platform 4.2 cluster begins, the Ansible playbook sets up a PXE server, DHCP server, DNS server, and HTTP server. The playbook also creates the ignition files that you need to drive your installation of the bootstrap, master, and worker nodes, and it configures HAProxy so that the installation infrastructure is ready for the next step. The Ansible playbook presents a list of node types that must be deployed in top-down order.
The Ansible playbook creates an installconfig file that is used to control deployment of the bootstrap node. The following figure shows the workflow to generate the installconfig file:
Figure 1. Generating the installconfig file
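A minimal sketch of what the generated installconfig (install-config.yaml) file contains for a bare-metal, user-provisioned deployment; every value below is a placeholder:

```shell
# Illustrative install-config.yaml for a bare-metal UPI cluster.
# All values are placeholders.
cat <<'EOF' > install-config.yaml
apiVersion: v1
baseDomain: example.com
metadata:
  name: ocp4
compute:
- name: worker
  replicas: 3
controlPlane:
  name: master
  replicas: 3
platform:
  none: {}              # bare metal, user-provisioned infrastructure
pullSecret: '...'       # obtained from Red Hat; elided here
sshKey: '...'           # public key for node access; elided here
EOF
```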
An ignition configuration control file starts the bootstrap node, as shown in the following figure:
Figure 2. OpenShift Container Platform 4.2 installation workflow: Creating the bootstrap, master, and worker nodes
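The workflow in Figure 2 maps onto a small number of installer invocations; a sketch, assuming the configuration assets live in an `ocp4` directory:

```shell
# Generate the ignition files that boot each node type
openshift-install create ignition-configs --dir=ocp4

# After the nodes are powered on, wait for the temporary bootstrap
# control plane to hand off to the master nodes
openshift-install wait-for bootstrap-complete --dir=ocp4

# Wait for the remaining cluster operators to finish installing
openshift-install wait-for install-complete --dir=ocp4
```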
The cluster bootstrapping process involves the following phases:
The master nodes (the control plane) now drive creation and instantiation of the worker nodes.
Your cluster is now viable and can be placed into service in readiness for Day-2 operations. You can expand the cluster by adding worker nodes.
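When a new worker node joins the cluster, its kubelet certificate signing requests must be approved before the node becomes schedulable; a sketch:

```shell
# List pending certificate signing requests from new nodes
oc get csr

# Approve the pending requests (a broad one-liner for illustration;
# review each request individually in production)
oc get csr -o name | xargs oc adm certificate approve

# Confirm that the new worker reaches the Ready state
oc get nodes
```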