The following steps create a worker node by using kickstart files that are generated by Ansible playbooks. References to the operating system ISO file that the kickstart files use are defined in the inventory file. See step 8 on page 20.
Note: Dell CSI drivers for PowerMax or Dell EMC Isilon nodes powered by Dell EMC PowerScale OneFS do not support Red Hat Enterprise Linux CoreOS (RHCOS) as a worker node operating system.
To install worker nodes:
- Connect to the iDRAC of a worker node and open the virtual console.
- In the iDRAC GUI, click Configuration and select BIOS Settings.
- Expand Network Settings.
- Set PXE Device1 to Enabled.
- Expand PXE Device1 Settings.
- Select NIC in Slot 2 Port 1 Partition 1 as the interface.
- Scroll to the bottom of the Network Settings section and click Apply.
The system automatically boots into the PXE network and displays the PXE menu, as shown in the following figure:

Figure 7. iDRAC console PXE menu
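If you prefer to script these BIOS changes instead of using the GUI, RACADM can typically set the same attributes remotely. The following is a minimal sketch, assuming the slot 2 port 1 partition 1 NIC enumerates as NIC.Slot.2-1-1; attribute and device names vary by server model and firmware:

racadm -r <idrac-ip> -u <idrac-user> -p <idrac-password> set BIOS.NetworkSettings.PxeDev1EnDis Enabled
racadm -r <idrac-ip> -u <idrac-user> -p <idrac-password> set BIOS.PxeDev1Settings.PxeDev1Interface NIC.Slot.2-1-1
racadm -r <idrac-ip> -u <idrac-user> -p <idrac-password> jobqueue create BIOS.Setup.1-1

The configuration job is applied at the next reboot, after which the node boots into the PXE menu shown in Figure 7.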
- Select worker-0 (the first worker node) and let the system reboot after the installation completes. Before the node reboots into PXE again, ensure that the hard disk is placed above the PXE interface in the boot order, as follows:
- Press F2 to enter System Setup.
- Select System BIOS > Boot Settings > UEFI Boot Settings > UEFI Boot Sequence.
- Select PXE Device 1 and click the minus (-) sign to move it down one position.
- Repeat the preceding step until PXE Device 1 is at the bottom of the boot menu.
- Click OK and then click Back.
- Click Finish and save the changes.
- Allow the node to boot into the hard drive where the operating system is installed.
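To confirm the resulting boot order without re-entering System Setup, the UEFI boot sequence can be read back through RACADM. A sketch, assuming the same remote RACADM access as above; attribute names vary by BIOS version:

racadm -r <idrac-ip> -u <idrac-user> -p <idrac-password> get BIOS.BiosBootSettings.UefiBootSeq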
- After the node comes up, ensure that the default users specified in the kickstart file exist:

Figure 8. Worker (worker-0) iDRAC console
- On the CSAH node, as user root, run ssh worker-0 to ensure that the correct IP address is assigned to bond0.
Note: The default password used in the kickstart files is password. This default password is also used for user core and user ansible.
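For example, a quick check from the CSAH node, using the hostname from the preceding step and the users named in the note above, might look like this:

[root@csah ~]# ssh worker-0 ip addr show bond0
[root@csah ~]# ssh worker-0 id core
[root@csah ~]# ssh worker-0 id ansible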
- From the CSAH node, as user ansible, copy the SSH keys to the worker node by using the following command:
[ansible@csah ~]$ ssh-copy-id worker-0.example.lab
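A passwordless login test confirms that the key was copied correctly; the command should return the worker hostname without prompting for a password:

[ansible@csah ~]$ ssh worker-0.example.lab hostname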
- Modify the values of subscription user, password, and pool id in the Ansible rhel_inv_file inventory file that is available in the <git clone dir>/ansible directory.
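The exact variable names are defined by the playbooks in your clone of the repository; the following excerpt is a hypothetical sketch of the subscription-related entries, with all key names assumed:

[all:vars]
rhsm_user=<subscription user>            # assumed key name
rhsm_password=<subscription password>    # assumed key name
rhsm_pool_id=<pool id>                   # assumed key name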
- Validate the compute.yaml Ansible playbook file that is available in the <git clone dir>/ansible directory.
- On the CSAH node, run the Ansible playbook. The playbook installs all prerequisites that are required to prepare the worker node to join the cluster.
Note: Ensure that the hostname for hosts key in the rhel_inv_file file is correct and the FQDN is specified (for example, worker-0.example.lab).
[ansible@csah ansible]$ pwd
/home/ansible/openshift-bare-metal/ansible
[ansible@csah ansible]$ ansible-playbook -i rhel_inv_file compute.yaml
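Because the playbook registers the node with a Red Hat subscription as part of the prerequisites, one quick post-run check is to confirm the subscription status on the worker (a sketch; requires sudo rights on the worker):

[ansible@csah ansible]$ ssh worker-0.example.lab sudo subscription-manager status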
- Modify the rh_rhel_worker inventory file in the <git clone dir>/ansible directory with the appropriate values, as shown in the following example:
[all:vars]
ansible_user=ansible
ansible_become=True
openshift_kubeconfig_path="/home/ansible/kubeconfig"
[new_workers]
worker-0.example.lab
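Before running the scaleup playbook, you can confirm that the kubeconfig referenced in openshift_kubeconfig_path is valid; this assumes that the oc client is available on the CSAH node:

[ansible@csah ~]$ oc --kubeconfig /home/ansible/kubeconfig get nodes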
- On the CSAH node, as user ansible, run the playbook that Red Hat provides to add the worker node to the existing cluster:
[ansible@csah openshift-ansible]$ pwd
/usr/share/ansible/openshift-ansible
[ansible@csah openshift-ansible]$ ansible-playbook -i <git clone dir>/ansible/rh_rhel_worker playbooks/scaleup.yml
Note: The playbook takes approximately 10 minutes to run. During this time, the compute node reboots and joins the existing cluster; its certificate signing requests (CSRs) are approved automatically.
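If the node does not join automatically because a CSR is left pending, the requests can be listed and approved manually with standard oc commands:

[core@csah ~]$ oc get csr
[core@csah ~]$ oc adm certificate approve <csr_name>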
- Check the status of the worker node:
[core@csah ~]$ oc get nodes
NAME                   STATUS   ROLES    AGE     VERSION
etcd-0.example.lab     Ready    master   5h44m   v1.16.2
etcd-1.example.lab     Ready    master   5h44m   v1.16.2
etcd-2.example.lab     Ready    master   5h44m   v1.16.2
worker-0.example.lab   Ready    worker   139m    v1.16.2
- Repeat the preceding steps for the second worker node, selecting worker-1 from the PXE menu in step 3.