Implementation Guide—Red Hat OpenShift Container Platform 4.10 on AMD-powered Dell Infrastructure: Installing a multinode cluster
At a high level, creating a multinode OpenShift Container Platform cluster consists of the following steps:
Start the cluster installation by creating a bootstrap KVM. The bootstrap KVM runs a temporary control plane that is used to bring up the persistent control plane on the control-plane nodes. The bootstrap KVM is created as a VM using the QEMU emulator on the CSAH node.
Note: This step is necessary because the Ansible playbooks configured DNS on the CSAH node.
[root@csah-pri ~]# nmcli connection modify Bridge-mgmt ipv4.dns <IP address>
[root@csah-pri ~]# systemctl restart NetworkManager
[root@csah-pri ~]# cat /etc/resolv.conf
# Generated by NetworkManager
search dcws.lab
nameserver <IP address>
The DNS IP is the keepalived IP that is specified in step 4 of Preparing and running the Ansible playbooks.
Note: If there is no secondary management node, specify the IP address that was configured for the primary CSAH node.
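A quick way to confirm that the DNS change took effect is to parse the regenerated resolv.conf and compare the result with the expected keepalived IP. The helper below is a sketch; its name and usage are illustrative and not part of the guide:

```shell
#!/bin/sh
# Print the nameserver entries from a resolv.conf-style file so the
# configured DNS IP can be compared with the expected keepalived IP.
# (Helper name and usage are illustrative, not part of the guide.)
get_nameservers() {
    awk '$1 == "nameserver" { print $2 }' "$1"
}

# On the CSAH node:
# get_nameservers /etc/resolv.conf
```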
To create the bootstrap KVM with virt-install, the Ansible playbooks generate a command and place it in the bootstrap_command file under the /home/ansible/files directory.
Note: Configure the graphical display to ensure that the PXE menu is displayed. If no graphical display is set, connect to the virtual console in iDRAC and run the command in step 4. Ensure that PXE is enabled through a bridge interface.
[root@csah-pri ~]# virt-install --name bootstrapkvm --ram 20480 --vcpu 8 --disk path=/home/bootstrapvm-disk.qcow2,format=qcow2,size=200 --os-variant generic --network=bridge=br0,model=virtio,mac=52:54:00:89:91:18 --pxe --boot uefi,hd,network &
Notes:
Do not change the MAC address. This address is autogenerated and added to the dhcpd.conf file by the Ansible playbooks. The ampersand (&) at the end of the command runs it in the background.
Ensure that the partition that is used to store the disk image is large enough. This example uses /home and allocates 200 GB to the qcow2 image that the bootstrap KVM uses. The size is a hard-coded value; reduce it if there is not enough space.
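Before running virt-install, the available space on the target partition can be checked with a short pre-flight helper. This is a sketch, not part of the guide; the 200 GB figure matches the --disk size in the command above, so adjust both together:

```shell
#!/bin/sh
# Pre-flight check before running virt-install: verify that the target
# partition can hold the qcow2 image. (Sketch; helper name is
# illustrative, not part of the guide.)
check_space() {
    # usage: check_space REQUIRED_GB MOUNTPOINT
    avail_gb=$(df --output=avail -BG "$2" | tail -1 | tr -dc '0-9')
    [ "$avail_gb" -ge "$1" ]
}

# Example:
# check_space 200 /home || echo "Not enough space; reduce the --disk size."
```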
The bootstrap KVM menu is displayed.
When the installation process is complete, the KVM reboots from its hard disk.
[core@csah-pri ~]$ ssh bootstrap sudo ss -tulpn | grep -E '6443|22623|2379'
tcp LISTEN 0 128 *:22623 *:* users:(("machine-config-",pid=6972,fd=8))
tcp LISTEN 0 128 *:6443 *:* users:(("kube-apiserver",pid=7998,fd=8))
tcp LISTEN 0 128 *:2379 *:* users:(("etcd",pid=6036,fd=5))
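The check above verifies that the machine config server (22623), API server (6443), and etcd (2379) are listening on the bootstrap node. The filter below wraps that check so it can be repeated; feeding the ss output in as text keeps the logic testable, and on the bootstrap node you would pipe in `sudo ss -tulpn`. The helper is illustrative, not part of the guide:

```shell
#!/bin/sh
# Report which of the expected bootstrap ports are NOT yet listening,
# by scanning `ss -tulpn`-style output from stdin.
# (Helper is illustrative, not part of the guide.)
missing_ports() {
    # stdin: ss output; args: ports to look for; prints ports not found
    input=$(cat)
    for port in "$@"; do
        echo "$input" | grep -q ":${port} " || echo "$port"
    done
}

# On the CSAH node:
# ssh bootstrap sudo ss -tulpn | missing_ports 6443 22623 2379
```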
To install the control-plane nodes:
The system boots automatically into the PXE network and displays the PXE menu, as shown in the following figure:
[core@bootstrap ~]$ journalctl -b -f -u release-image.service -u bootkube.service
Aug 29 12:40:03 bootstrap.dcws.lab bootkube.sh[22133]: I0829 12:40:03.262988 1 waitforceo.go:67] waiting on condition EtcdRunningInCluster in etcd CR /cluster to be True.
Aug 29 12:42:19 bootstrap.dcws.lab bootkube.sh[22133]: I0829 12:42:19.058805 1 waitforceo.go:64] Cluster etcd operator bootstrapped successfully
Aug 29 12:42:19 bootstrap.dcws.lab bootkube.sh[22133]: I0829 12:42:19.058901 1 waitforceo.go:58] cluster-etcd-operator bootstrap etcd
Aug 29 12:42:19 bootstrap.dcws.lab bootkube.sh[22133]: bootkube.service complete
Aug 29 12:42:19 bootstrap.dcws.lab systemd[1]: bootkube.service: Succeeded.
To complete the bootstrap process:
[core@csah-pri ~]$ ./openshift-install --dir=openshift wait-for bootstrap-complete --log-level debug
DEBUG OpenShift Installer 4.10.28
DEBUG Built from commit 6db5fb9d56c9284124cf9147afd8f3e79345e907
INFO Waiting up to 20m0s (until 9:04AM) for the Kubernetes API at https://api.ocp-amd.dcws.lab:6443...
INFO API v1.23.5+012e945 up
DEBUG Loading Install Config...
DEBUG Loading SSH Key...
DEBUG Loading Base Domain...
DEBUG Loading Platform...
DEBUG Loading Cluster Name...
DEBUG Loading Base Domain...
DEBUG Loading Platform...
DEBUG Loading Networking...
DEBUG Loading Platform...
DEBUG Loading Pull Secret...
DEBUG Loading Platform...
DEBUG Using Install Config loaded from state file
INFO Waiting up to 30m0s (until 9:14AM) for bootstrapping to complete...
DEBUG Bootstrap status: complete
INFO It is now safe to remove the bootstrap resources
INFO Time elapsed: 0s
[core@csah-pri ~]$ oc get nodes
NAME              STATUS   ROLES    AGE   VERSION
etcd-0.dcws.lab   Ready    master   3h    v1.23.5+012e945
etcd-1.dcws.lab   Ready    master   3h    v1.23.5+012e945
etcd-2.dcws.lab   Ready    master   3h    v1.23.5+012e945
Note: In a three-node cluster, each control-plane node also has the worker role in addition to the master role.
Note: In a cluster of five or more nodes, the compute nodes must be in the Ready state before the cluster operators show AVAILABLE as True.
Note: Skip these installation instructions for a three-node cluster.
To install the compute nodes:
The system automatically boots into the PXE network and displays the PXE menu, as shown in the following figure:
[core@csah-pri ~]$ oc get csr -o name | xargs oc adm certificate approve
[core@csah-pri ~]$ oc get nodes
NAME                 STATUS   ROLES    AGE   VERSION
compute-1.dcws.lab   Ready    worker   1h    v1.23.5+012e945
compute-2.dcws.lab   Ready    worker   1h    v1.23.5+012e945
etcd-0.dcws.lab      Ready    master   3h    v1.23.5+012e945
etcd-1.dcws.lab      Ready    master   3h    v1.23.5+012e945
etcd-2.dcws.lab      Ready    master   3h    v1.23.5+012e945
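Compute nodes request certificates in two rounds (client certificates first, then serving certificates), so the approval command above may need to be repeated before the nodes reach the Ready state. The filter below selects only Pending requests from the `oc get csr` table; the surrounding loop is a sketch and requires a live cluster:

```shell
#!/bin/sh
# Print the names of Pending CSRs from `oc get csr` tabular output
# (the CONDITION column is last). The helper is illustrative, not
# part of the guide.
pending_csrs() {
    awk 'NR > 1 && $NF == "Pending" { print $1 }'
}

# Sketch of a repeated approval pass (requires a live cluster):
# for i in 1 2 3; do
#     oc get csr | pending_csrs | xargs -r oc adm certificate approve
#     sleep 30
# done
```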
This section uses openshift as the value of the install_dir variable. See the inventory file under <git clone dir>/python/generated_inventory for the value that is specified for install_dir.
After the bootstrap, control-plane, and compute nodes are installed, complete the cluster setup:
[core@csah-pri ~]$ oc get clusteroperators
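While waiting for the installation to complete, it is useful to see which operators are still coming up. The filter below lists operators whose AVAILABLE column is not yet True in the `oc get clusteroperators` table; it is a sketch helper, not part of the guide:

```shell
#!/bin/sh
# List cluster operators that are not yet Available, by filtering the
# `oc get clusteroperators` table (columns: NAME VERSION AVAILABLE
# PROGRESSING DEGRADED SINCE). Illustrative helper, not from the guide.
unavailable_operators() {
    awk 'NR > 1 && $3 != "True" { print $1 }'
}

# Usage on a live cluster:
# oc get clusteroperators | unavailable_operators
```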
[core@csah-pri ~]$ ./openshift-install --dir=openshift wait-for install-complete --log-level debug
DEBUG OpenShift Installer 4.10.28
DEBUG Built from commit 6db5fb9d56c9284124cf9147afd8f3e79345e907
DEBUG Loading Install Config...
DEBUG Loading SSH Key...
DEBUG Loading Base Domain...
DEBUG Loading Platform...
DEBUG Loading Cluster Name...
DEBUG Loading Base Domain...
DEBUG Loading Platform...
DEBUG Loading Networking...
DEBUG Loading Platform...
DEBUG Loading Pull Secret...
DEBUG Loading Platform...
DEBUG Using Install Config loaded from state file
INFO Waiting up to 40m0s (until 10:32AM) for the cluster at https://api.ocp-amd.dcws.lab:6443 to initialize...
DEBUG Cluster is initialized
INFO Waiting up to 10m0s (until 10:02AM) for the openshift-console route to be created...
DEBUG Route found in openshift-console namespace: console
DEBUG OpenShift console route is admitted
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/core/openshift/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.ocp-amd.dcws.lab
INFO Login to the console with user: "kubeadmin", and password: "xxxx-xxxx-xxxx-xxxx"
A bootstrap node was created as part of the deployment procedure. Now that the OpenShift Container Platform cluster is running, you can remove this node. The bootstrap_node entry in the generated inventory file identifies it:
bootstrap_node:
- name: bootstrap
ip: 192.168.46.26
mac: B8:59:9F:C0:35:86
[ansible@csah-pri ansible]$ ansible-playbook -i generated_inventory haocp.yaml
[ansible@csah-pri ansible]$ sudo virsh list
 Id   Name           State
----------------------------------------------------
 2    bootstrapkvm   running
[ansible@csah-pri ansible]$ sudo virsh destroy bootstrapkvm
[ansible@csah-pri ansible]$ sudo virsh undefine --nvram bootstrapkvm
[ansible@csah-pri ansible]$ sudo rm -rf /home/bootstrapvm-disk.qcow2
Note: Replace the location of the qcow2 image as appropriate.
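The cleanup steps above can be wrapped in a short script. This is a sketch that guards the file deletion so the cleanup can be rerun safely; the guard helper is illustrative and not part of the guide:

```shell
#!/bin/sh
# Guard helper: delete a file only if it exists, so rerunning the
# cleanup is harmless. (Illustrative helper, not part of the guide.)
remove_if_exists() {
    [ ! -f "$1" ] || rm -f "$1"
}

# Sketch of the full bootstrap cleanup (requires libvirt on the CSAH node):
# sudo virsh destroy bootstrapkvm
# sudo virsh undefine --nvram bootstrapkvm
# remove_if_exists /home/bootstrapvm-disk.qcow2   # adjust the path as appropriate
```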