To deploy a Red Hat OpenShift cluster, you can choose one of the following methods:
The cluster deployment process varies depending on the cluster topology, as described in the following section.
OpenShift Container Platform 4.14 offers different topologies for different workload requirements, with different levels of server hardware footprints and high availability (HA). The topologies are:
Ensure that:
The deployment begins with initial switch provisioning, which prepares the environment for installation of the CSAH node by:
Dell Technologies has developed Ansible playbooks that fully prepare both CSAH nodes. See User-provisioned infrastructure installation. For a SNO deployment, the Ansible playbook sets up a DHCP server and a DNS server. The CSAH node is also used as an admin host to perform operations and management tasks on the SNO cluster.
Note: For enterprise sites, consider deploying appropriately hardened DHCP and DNS servers and using resilient multiple-node High Availability Proxy (HAProxy) configuration. The Ansible playbook for this design can deploy multiple CSAH nodes for resilient HAProxy configuration. This guide provides CSAH Ansible playbooks for reference at the implementation stage. For more information, see HAProxy config tutorials.
The Ansible playbook creates a YAML file called install-config.yaml to control deployment of the bootstrap node.
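The generated install-config.yaml follows the standard OpenShift installer schema. The following is a minimal sketch for a bare-metal, user-provisioned deployment; the domain, cluster name, and credential placeholders are illustrative values, not values prescribed by this design:

```yaml
apiVersion: v1
baseDomain: example.lab          # placeholder domain
metadata:
  name: ocp                      # placeholder cluster name
compute:
- name: worker
  replicas: 0                    # UPI: compute nodes join after bootstrap
controlPlane:
  name: master
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}                       # user-provisioned bare metal
pullSecret: '<pull-secret-JSON>'
sshKey: '<ssh-public-key>'
```

The installer consumes this file to generate the manifests and the Ignition configurations for the bootstrap, control-plane, and compute nodes.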
An Ignition configuration file starts the bootstrap node. The following figure shows the installation workflow:
Note: An installation that is driven by Ignition configuration generates security certificates that expire after 24 hours. You must install the cluster before the certificates expire, and the cluster must operate in a viable (nondegraded) state so that the first certificate rotation can be completed.
The cluster bootstrapping process consists of the following phases:
The cluster is now viable and can be placed into service in readiness for Day-2 operations. You can expand the cluster by adding more compute nodes for your requirements.
Dell Technologies has developed Ansible playbooks that fully prepare both CSAH nodes. Before installation of the OpenShift Container Platform 4.14 cluster begins, the Ansible playbook sets up the PXE server, DHCP server, DNS server, HAProxy, and HTTP server. If a second CSAH node is deployed, the playbook also sets up DNS, HAProxy, HTTP, and Keepalived services on that node. The playbook creates Ignition files to drive installation of the bootstrap, control-plane, and compute nodes, and also starts the bootstrap VM to initialize control-plane components. The playbook presents a list of node types that must be deployed in top-down order.
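The playbook-driven flow above is typically fed by an inventory that maps each node role to the network identity that the PXE and DHCP services use. The sketch below is purely illustrative; the actual group and variable names are defined by the Dell-provided CSAH playbooks:

```yaml
# Hypothetical inventory sketch only; real variable names are
# defined by the Dell CSAH Ansible playbooks.
all:
  children:
    csah:
      hosts:
        csah-1.example.lab:
    bootstrap:
      hosts:
        bootstrap.example.lab:
          mac: "aa:bb:cc:dd:ee:01"   # MAC served by the PXE/DHCP setup
    control_plane:
      hosts:
        master-0.example.lab:
          mac: "aa:bb:cc:dd:ee:02"
    compute:
      hosts:
        worker-0.example.lab:
          mac: "aa:bb:cc:dd:ee:03"
```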
The installer-provisioned infrastructure (IPI) installation on bare metal nodes provisions and configures the infrastructure on which an OpenShift cluster runs. OpenShift Container Platform manages all aspects of the cluster.
The CSAH node is used as the provisioner node and hosts infrastructure services such as DNS and an optional DHCP server. The bootstrap VM is hosted on the CSAH node for cluster setup. Dell-created Ansible playbooks set up CSAH nodes and automate the predeployment tasks, including configuring the CSAH node, downloading the OpenShift installer, installing the OpenShift client, and creating the manifest files that are required by the installer.
Cluster deployment using IPI installation is a two-phase process:
Using the Assisted Installer deployment method, you create a cluster configuration using the web-based UI or the RESTful API. A CSAH node accesses the cluster and hosts the DNS and DHCP servers.
The interface prompts for required values and provides default values for the remaining parameters. After you enter all the required details, a bootable discovery ISO is generated that is used to boot the cluster nodes. Along with RHCOS, the bootable ISO also contains an agent that handles cluster provisioning. The bootstrapping process completes on one of the cluster nodes, so a dedicated bootstrap node or VM is not required. After the nodes are discovered on the Assisted Installer console, you can select each node's role (control plane or compute), installation disk, and networking options. The Assisted Installer performs prechecks before starting the installation.
You can monitor the cluster installation status or download installation logs and kubeadmin user credentials from the Assisted Installer console.
This method is recommended for clusters with an air-gapped or disconnected network. You must download and install the agent-based installer on the CSAH node.
The bootable ISO contains the assisted discovery agent and the assisted service. The assisted service runs on only one of the control-plane nodes, and that node eventually becomes the bootstrap host. The assisted service ensures that all the cluster hosts meet the requirements, and then triggers an OpenShift cluster deployment.
The following figure shows the cluster node installation workflow:
A mirror registry enables disconnected installations to host container images that are required for initial cluster deployment. The mirror registry must have access to the private network and to the Internet to obtain the necessary container images. The options are:
To use the mirror images for a disconnected installation, update the install-config.yaml file with the information in the imageContentSources section, as shown in the following example:
imageContentSources:
- mirrors:
  - csah.dcws.lab:8443/ocp4/openshift414
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - csah.dcws.lab:8443/ocp4/openshift414
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
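On a running cluster, equivalent mirror rules are expressed as an ImageContentSourcePolicy resource, which the installer generates from the imageContentSources section. The following sketch reuses the same mirror host; the resource name is a placeholder:

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: ocp4-mirror          # placeholder name
spec:
  repositoryDigestMirrors:
  - mirrors:
    - csah.dcws.lab:8443/ocp4/openshift414
    source: quay.io/openshift-release-dev/ocp-release
  - mirrors:
    - csah.dcws.lab:8443/ocp4/openshift414
    source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
```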
Zero-Touch Provisioning (ZTP) provisions new edge sites using declarative configurations. ZTP can deploy an OpenShift cluster quickly and reliably in any environment: edge, remote office/branch office (ROBO), disconnected, or air-gapped. ZTP can deploy and deliver OpenShift 4.14 clusters in a hub-spoke architecture, where a single hub cluster can manage multiple spoke clusters.
The following diagram shows how ZTP works in a far-edge environment:
Figure 3. OpenShift spoke cluster deployment using ZTP
ZTP uses the GitOps method for infrastructure deployment. Declarative specifications are stored in Git repositories in predefined patterns such as YAML. Red Hat Advanced Cluster Management (RHACM) for Kubernetes uses the declarative output for multisite deployment. GitOps addresses reliability issues by providing traceability, role-based access control (RBAC), and a single source of truth regarding the state of each site. The SiteConfig workflow uses ArgoCD as the engine for the GitOps method of site deployment. After a site plan that contains all the required parameters for deployment is complete, a policy generator creates the manifests and applies them to the hub cluster.
For more information, see Installing managed clusters with RHACM and SiteConfig resources.
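A site plan is expressed as a SiteConfig custom resource on the hub cluster. The following is a minimal sketch for a single-node spoke site; all names, addresses, and secret references are placeholders, and a production SiteConfig carries additional networking and host detail:

```yaml
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: edge-site-1            # placeholder site name
  namespace: edge-site-1
spec:
  baseDomain: example.lab
  clusterImageSetNameRef: openshift-4.14
  pullSecretRef:
    name: assisted-deployment-pull-secret
  clusters:
  - clusterName: edge-site-1
    networkType: OVNKubernetes
    nodes:
    - hostName: sno-1.edge-site-1.example.lab
      role: master
      bmcAddress: idrac-virtualmedia://192.0.2.10/redfish/v1/Systems/System.Embedded.1
      bmcCredentialsName:
        name: sno-1-bmc-secret   # placeholder BMC credential secret
      bootMACAddress: "aa:bb:cc:dd:ee:10"
```

ArgoCD reconciles this resource from the Git repository, and the policy generator produces the installation manifests for the spoke cluster.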
The following figure shows the ZTP flow that is used in a spoke-cluster deployment:
Figure 4. ZTP deployment flow
As an add-on to OpenShift Container Platform, OpenShift Virtualization enables you to run and manage VM workloads alongside container workloads. An enhanced web console provides a graphical portal to manage these virtualized resources alongside the OpenShift Container Platform cluster containers and infrastructure.
OpenShift Virtualization consists of the following components:
For more information, see OpenShift Virtualization architecture.
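With OpenShift Virtualization installed, a VM is declared as a VirtualMachine custom resource and managed like any other cluster workload. A minimal sketch follows; the VM name and disk image are illustrative placeholders:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-vm              # placeholder VM name
spec:
  running: true                # start the VM when the resource is created
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
      - name: rootdisk
        containerDisk:
          image: quay.io/containerdisks/fedora:latest  # example image
```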
When planning the OpenShift Virtualization deployment, take into account that:
For more information, see Preparing your cluster for OpenShift Virtualization.
As an add-on feature, OpenShift Virtualization imposes an overhead cost that must be taken into account during the planning phase. Oversubscribing the physical resources in a cluster can affect performance. See physical resource overhead requirements.
Red Hat Hosted Control Planes
The introduction of hosted control planes for Red Hat OpenShift significantly simplifies the management of OpenShift clusters at scale, reducing both complexity and operational costs. This advancement enables you to create control planes as pods on a hosting cluster, eliminating the need for dedicated physical machines for each control plane.
To host control planes on OpenShift Container Platform version 4.14, you must:
For more information, see Installation of Hosted control planes.
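A hosted control plane is declared through a HostedCluster custom resource on the hosting cluster. The following is an illustrative sketch only; all names are placeholders, and a complete HostedCluster spec requires additional settings (networking, service publishing, and node pools) that are omitted here:

```yaml
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: hosted-1               # placeholder hosted cluster name
  namespace: clusters
spec:
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.14.0-x86_64
  pullSecret:
    name: hosted-1-pull-secret # placeholder secret reference
  sshKey:
    name: hosted-1-ssh-key     # placeholder secret reference
  platform:
    type: Agent                # bare-metal hosts managed by the agent platform
    agent:
      agentNamespace: hosted-1-agents
```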
Red Hat OpenShift AI is a platform for data scientists and developers of artificial intelligence (AI) applications. OpenShift AI enables data scientists to analyze data and provides a cloud-based instance of Jupyter Notebook. For more information, see Overview of OpenShift AI.
To use the OpenShift AI platform, you must: