Design Guide—Red Hat OpenShift Container Platform 4.14 on Intel-powered Dell Infrastructure: Deployment process
To deploy a Red Hat OpenShift cluster, use one of the following methods:
- User-provisioned infrastructure (UPI) installation
- Installer-provisioned infrastructure (IPI) installation
- Assisted Installer
- Agent-based installer
The deployment process for OpenShift nodes varies depending on the cluster topology, as described in the following section.
OpenShift Container Platform 4.14 offers different topologies for different workload requirements, with different levels of server hardware footprints and high availability (HA).
Ensure that:
The deployment begins with initial switch provisioning, which enables preparation and installation of the CSAH node by:
Dell Technologies has generated Ansible playbooks that fully prepare both CSAH nodes. See User-provisioned infrastructure installation. For an SNO deployment, the Ansible playbook sets up a DHCP server and a DNS server. The CSAH node also serves as an admin host for performing operations and management tasks on the SNO cluster.
Note: For enterprise sites, consider deploying appropriately hardened DHCP and DNS servers and using resilient multiple-node HAProxy configuration. The Ansible playbook for this design can deploy multiple CSAH nodes for resilient HAProxy configuration. This guide provides CSAH Ansible playbooks for reference at the implementation stage.
The Ansible playbook creates a YAML file called install-config.yaml to control deployment of the bootstrap node.
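As a sketch of what the generated file contains — every value below is a placeholder, not the playbook's actual output — a minimal install-config.yaml for a user-provisioned bare-metal cluster looks like this:

```yaml
apiVersion: v1
baseDomain: example.com          # cluster base DNS domain (placeholder)
metadata:
  name: ocp                      # cluster name (placeholder)
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}                       # "none" indicates user-provisioned infrastructure
pullSecret: '<pull-secret-JSON>'
sshKey: '<ssh-public-key>'
```

The OpenShift installer consumes this file and deletes it when it generates the ignition configuration files, so keep a backup copy outside the asset directory.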
The following figure shows the installation workflow. An Ignition configuration control file starts the bootstrap node.
Note: An installation that is driven by Ignition configuration generates security certificates that expire after 24 hours. You must install the cluster before the certificates expire, and the cluster must operate in a viable (nondegraded) state so that the first certificate rotation can complete.
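If installation stalls around certificate rotation, the standard OpenShift client commands below can be used from the admin host to inspect and approve pending certificate signing requests (a general sketch; the asset directory and kubeconfig path are examples and depend on your setup):

```shell
export KUBECONFIG=ocp/auth/kubeconfig   # example path to the generated kubeconfig

# List certificate signing requests; entries in Pending state block node joins
oc get csr

# Approve outstanding CSRs so that nodes can join and rotation can complete
oc get csr -o name | xargs oc adm certificate approve
```

Two rounds of approval are commonly needed as each node first requests a client certificate and then a serving certificate.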
The cluster bootstrapping process consists of the following phases:
The cluster is now viable and can be placed into service in readiness for Day-2 operations. You can expand the cluster by adding more compute nodes for your requirements.
Dell Technologies has generated Ansible playbooks that fully prepare both CSAH nodes. Before the installation of the OpenShift Container Platform 4.14 cluster begins, the Ansible playbook sets up the PXE server, DHCP server, DNS server, HAProxy, and HTTP server. If a second CSAH node is deployed, the playbook also sets up DNS, HAProxy, HTTP, and KeepAlived services on that node. The playbook creates ignition files to drive installation of the bootstrap, control-plane, and compute nodes, and also starts the bootstrap VM to initialize control-plane components. The playbook presents a list of node types that must be deployed in top-down order.
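Once the playbook has generated the ignition files and the nodes have been booted in that top-down order, installation progress is typically monitored with the standard OpenShift installer commands (shown here with a hypothetical asset directory, ocp):

```shell
# Wait for the bootstrap VM to finish initializing the control plane
openshift-install wait-for bootstrap-complete --dir=ocp --log-level=info

# After bootstrapping finishes, the bootstrap VM can be removed, then:
openshift-install wait-for install-complete --dir=ocp
# kubeadmin credentials are written under ocp/auth/
# (kubeconfig and kubeadmin-password)
```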
The installer-provisioned infrastructure (IPI) installation on bare metal nodes provisions and configures the infrastructure on which an OpenShift cluster runs. OpenShift Container Platform manages all aspects of the cluster.
The CSAH node is used as the provisioner node and hosts infrastructure services such as DNS and an optional DHCP server. The bootstrap VM is hosted on the CSAH node for cluster setup. Dell-created Ansible playbooks set up the CSAH nodes and automate the predeployment tasks, including configuring the CSAH node, downloading the OpenShift installer, installing the OpenShift client, and creating the manifest files that the installer requires.
Cluster deployment using IPI installation is a two-phase process.
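For reference, the platform section of an IPI install-config.yaml on bare metal declares each host together with its baseboard management controller (BMC). The fragment below is illustrative only, with placeholder addresses and a Dell iDRAC Redfish virtual-media BMC URL:

```yaml
platform:
  baremetal:
    apiVIPs:
      - 192.168.10.5             # virtual IP for the API endpoint (placeholder)
    ingressVIPs:
      - 192.168.10.6             # virtual IP for ingress (placeholder)
    hosts:
      - name: control-0
        role: master
        bmc:
          # Dell iDRAC accessed through Redfish virtual media
          address: idrac-virtualmedia://192.168.10.21/redfish/v1/Systems/System.Embedded.1
          username: root
          password: <password>
        bootMACAddress: aa:bb:cc:dd:ee:01
```

The installer uses these BMC entries to power-cycle and image each node, which is what removes the need for manually booting nodes in the IPI flow.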
Using the Assisted Installer deployment method, you create a cluster configuration through the web-based UI or the RESTful API. The CSAH node provides access to the cluster and hosts the DNS and DHCP servers.
The interface prompts for required values and provides default values for the remaining parameters, which you can change. After you enter all the required details, the Assisted Installer generates a bootable discovery ISO that is used to boot the cluster nodes. Along with RHCOS, the bootable ISO contains an agent that handles cluster provisioning. The bootstrapping process completes on one of the cluster nodes, so a separate bootstrap node or VM is not required. After the nodes are discovered on the Assisted Installer console, you can select each node's role (control plane or compute), installation disk, and networking options. The Assisted Installer performs prechecks before starting the installation.
You can monitor the cluster installation status or download installation logs and kubeadmin user credentials from the Assisted Installer console.
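The same status information is also exposed through the Assisted Installer REST API. As a sketch — this assumes the ocm CLI is installed for token exchange and that jq is available — cluster status can be queried with:

```shell
# Obtain a bearer token from your Red Hat account (assumes ocm is logged in)
TOKEN=$(ocm token)

# List your Assisted Installer clusters and show each one's status
curl -s -H "Authorization: Bearer $TOKEN" \
  https://api.openshift.com/api/assisted-install/v2/clusters \
  | jq '.[] | {name, status}'
```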
This method is recommended for clusters with an air-gapped or disconnected network. You must download and install the agent-based installer on the CSAH node.
The bootable ISO contains the assisted discovery agent and the assisted service. The assisted service runs on only one of the control-plane nodes, and that node eventually becomes the bootstrap host. The assisted service ensures that all the cluster hosts meet the requirements, and then triggers an OpenShift cluster deployment.
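In the agent-based flow, generating the bootable ISO and monitoring the installation use the openshift-install agent subcommands. The commands below are a sketch run from the CSAH node; the asset directory name is an example, and the directory is expected to contain install-config.yaml and agent-config.yaml before the image is created:

```shell
# Generate the bootable agent ISO from install-config.yaml and agent-config.yaml
openshift-install agent create image --dir=ocp-agent

# Boot all cluster nodes from ocp-agent/agent.x86_64.iso, then monitor:
openshift-install agent wait-for bootstrap-complete --dir=ocp-agent
openshift-install agent wait-for install-complete --dir=ocp-agent
```

Because the ISO embeds everything needed for discovery and bootstrapping, this flow works without external connectivity, which is why it suits air-gapped sites.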
The following figure shows the cluster node installation workflow: