Note: Ignore this section if the cluster is a 3-node setup.
Follow these steps:
- Connect to the iDRAC of a compute node and open the virtual console.
- In the iDRAC UI, click Configuration and select BIOS Settings.
- Expand Network Settings.
- Set PXE Device1 to Enabled.
- Expand PXE Device1 Settings.
- Select NIC in Slot 2 Port 1 Partition 1 as the interface.
- Scroll to the bottom of the Network Settings section and click Apply.
The system automatically boots from the PXE network and displays the PXE menu, as shown in the following figure:
Figure 7. iDRAC console: PXE menu
- Select compute-1 and let the system reboot after the installation. Before the node reboots into PXE again, ensure that the hard disk is placed above the PXE interface in the boot order:
- Press F2 to enter System Setup.
- Select System BIOS > Boot Settings > UEFI Boot Settings > UEFI Boot Sequence.
- Select PXE Device 1 and click - to move it down one position.
- Repeat the preceding step until PXE Device 1 is at the bottom of the boot menu.
- Click OK and then click Back.
- Click Finish and save the changes.
- Let the node boot from the hard drive on which the operating system is installed, as shown in the following figure:
Figure 8. iDRAC console: compute-1
- Repeat the preceding steps for the remaining compute nodes. Then:
- Skip steps 6 through 13 if RHCOS is the compute node operating system.
- Continue with steps 6 through 13 if the compute node operating system is Red Hat Enterprise Linux 7.9.
- For Red Hat Enterprise Linux compute nodes, ensure that the default users who are specified in the kickstart file exist, as shown in the following figure:
Figure 9. iDRAC console: compute-3
- On the CSAH node, as user root, run ssh compute-3 to verify that the correct IP address is assigned to bond0, as shown in the example below.
The default password that is used in the kickstart files is password. This default password is also set for the user and ansible users.
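For example, a quick check from the CSAH node might look like the following; ip addr show is the standard iproute2 command for inspecting an interface, and the address details vary by environment:
[root@csah ~]# ssh compute-3 ip addr show bond0
Confirm that the inet line lists the IP address that is assigned to the node.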
- From the CSAH node, as user ansible, copy the SSH key to the compute node by running:
[ansible@csah ~]$ ssh-copy-id compute-3.example.com
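To confirm that key-based login works before running the playbooks, a non-interactive test such as the following can help; the BatchMode=yes option makes ssh fail rather than prompt for a password:
[ansible@csah ~]$ ssh -o BatchMode=yes compute-3.example.com hostname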
- Modify the values of the subscription user, password, and pool ID in the rhel_inv_file Ansible inventory file that is available in the <git clone dir>/ansible/inventory directory, as sketched below.
Note: Ensure that the hostname under the hosts key in the rhel_inv_file file is correct and that the FQDN is specified (for example, compute-3.example.com).
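The exact structure of rhel_inv_file is generated by the Dell Ansible tooling and can differ between releases; the variable names in this sketch (rhsm_user, rhsm_password, rhsm_pool) are hypothetical placeholders that illustrate the kind of entries to update:
[ansible@csah inventory]$ cat rhel_inv_file
# hypothetical layout; match the key names to those in your generated file
all:
  hosts:
    compute-3.example.com:
  vars:
    rhsm_user: <subscription user>
    rhsm_password: <subscription password>
    rhsm_pool: <pool id>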
- Validate the compute.yaml Ansible playbook that is available in the <git clone dir>/ansible directory.
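One way to validate the playbook without running it is a syntax check, using the standard --syntax-check option of ansible-playbook:
[ansible@csah ansible]$ ansible-playbook -i inventory/rhel_inv_file compute.yaml --syntax-check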
- On the CSAH node, run the Ansible playbook. The playbook installs all prerequisites that are required to set up the compute node to join the cluster.
[ansible@csah ansible]$ pwd
<git clone dir>/ansible
[ansible@csah ansible]$ ansible-playbook -i inventory/rhel_inv_file compute.yaml
- Modify the rh_rhel_worker inventory file in the <git clone dir>/ansible/inventory directory with the appropriate values, as shown in the following example:
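The variable and group names below follow the inventory format that the Red Hat openshift-ansible scaleup playbook documents; the hostname is a placeholder for your environment:
[ansible@csah inventory]$ cat rh_rhel_worker
[all:vars]
ansible_user=root
openshift_kubeconfig_path="~/.kube/config"

[new_workers]
compute-3.example.com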
- On the CSAH node, as user ansible, run the playbook that Red Hat provides to add the compute node to the existing cluster:
[ansible@csah openshift-ansible]$ pwd
[ansible@csah openshift-ansible]$ ansible-playbook -i <git clone dir>/ansible/inventory/rh_rhel_worker playbooks/scaleup.yml
The playbook runs in approximately 10 minutes. During this time, the compute node reboots and joins the existing cluster by auto-approving the certificate signing request (CSR).
- As user core on the CSAH node, approve the CSRs to ensure that RHCOS-based compute nodes are added to the cluster:
[core@csah ~]$ oc get csr -o name | xargs oc adm certificate approve
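Each joining node typically generates a client CSR and, after that request is approved, a serving CSR, so the approval command may need to be run more than once. To check for outstanding requests:
[core@csah ~]$ oc get csr | grep -i pending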
- Check the status of the compute nodes and verify that all compute nodes are listed and that their status is Ready:
[core@csah ~]$ oc get nodes
NAME                    STATUS   ROLES    AGE     VERSION
compute-1.example.com   Ready    worker   6d7h    v1.19.0+43983cd
compute-2.example.com   Ready    worker   6d7h    v1.19.0+43983cd
compute-3.example.com   Ready    worker   81s     v1.19.0+7070803
etcd-0.example.com      Ready    master   6d12h   v1.19.0+43983cd
etcd-1.example.com      Ready    master   6d12h   v1.19.0+43983cd
etcd-2.example.com      Ready    master   6d12h   v1.19.0+43983cd
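To list only the compute nodes, the standard worker role label can be used as a selector:
[core@csah ~]$ oc get nodes -l node-role.kubernetes.io/worker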