As user ansible (unless otherwise specified), prepare and run the Ansible playbooks.
Note: Ensure that the CSAH node can reach the iDRAC network IPs. If there is no connectivity, manually create the inventory file by following the sample file in the GitHub repository.
Note: Ensure that only values in the YAML file are modified. Keys must always remain the same.
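For reference, the following is a minimal sketch of what a control-plane entry in nodes.yaml might look like. The key names shown here are hypothetical; the sample file in the GitHub repository is the authoritative reference:
control_nodes:                # hypothetical keys, for illustration only
  - name: etcd-0              # value to edit: hostname of the node
    ip_idrac: 192.168.34.21   # value to edit: iDRAC IP address of the node
    ip_os: 192.168.46.21      # value to edit: OS IP address of the node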
[ansible@csah python]$ python3 generate_inventory_file.py
usage: generate_inventory_file.py [-h] [--run | --add] --ver {4.6} --nodes NODES [--id_user ID_USER] [--id_pass ID_PASS] [--debug]
Generate Inventory
optional arguments:
-h, --help show this help message and exit
--run generate inventory file
--add number of compute nodes
--ver {4.6} specify OpenShift version
--nodes NODES nodes inventory file
--id_user ID_USER specify idrac user
--id_pass ID_PASS specify idrac password
--debug specify debug logs
Note: If the iDRAC user and password are the same across all control-plane and compute nodes, run the program with arguments --id_user and --id_pass.
[ansible@csah python]$ python3 generate_inventory_file.py --run --id_user <user> --id_pass <password> --ver 4.6 --nodes nodes.yaml
Note: The --ver 4.6 argument specifies the OpenShift version; currently, the script accepts only the value 4.6. The nodes.yaml file that you updated in Step 1 includes information about the bootstrap, control-plane, and compute nodes.
A list of numbered tasks is displayed, as shown in the following figure:
Figure 2. Inventory file generation input tasks menu
Note: If you are unsure about what value to enter for an option, accept the default value if it is provided.
provide complete path of directory to download OCP 4.6 software bits
default [/home/ansible/files]:
Option 1 downloads OpenShift Container Platform 4.6 software from Red Hat into a directory for which user ansible has permissions. This guide assumes that the directory is specified as /home/ansible/files.
i. Enter the cluster installation option by selecting either 3 node or 6+ node:
task choice for necessary inputs: 2
supported cluster install options:
1. 3 node (control/compute in control nodes)
2. 6+ node (3 control and 3+ compute)
enter cluster install option: 2
Note: OpenShift 4.6 supports the 3 node and 6+ node cluster options. The following example shows the steps to follow if you select a 6+ node cluster installation. If you select the 3 node installation option, you are not prompted for information about compute nodes.
ii. Specify the bootstrap node name and the IP address to be assigned to the bootstrap node:
enter the bootstrap node name
default [bootstrap]:
ip address for os in bootstrap node: 192.168.46.19
Note: Leave the IP address 192.168.46.19 that you specified in the preceding step unassigned. The bootstrap node is created as a KVM virtual machine by using virt-install.
Note: The following example assumes that three control-plane nodes are set up in the cluster. NIC.Slot.2-1-1 is used for DHCP, and PXE boot is enabled on that interface. Bonding is performed through two interfaces: NIC.Slot.2-1-1 and NIC.Slot.2-2-1. If only one interface is available, specify NO.
Do you want to perform bonding (y/NO): y
ip address for os in etcd-0 node: 192.168.46.21
ip address for idrac in etcd-0 node: 192.168.34.21
1 -> NIC.Integrated.1-1-1
2 -> NIC.Integrated.1-2-1
3 -> NIC.Slot.2-1-1
4 -> NIC.Slot.2-2-1
Select the interface used by DHCP: 3
selected interface is: NIC.Slot.2-1-1
device NIC.Slot.2-1-1 mac address is B8:59:9F:C0:36:46
1 -> NIC.Integrated.1-1-1
2 -> NIC.Integrated.1-2-1
3 -> NIC.Slot.2-1-1
4 -> NIC.Slot.2-2-1
Select the interface used by etcd-0 active bond interface: 3
selected interface is: NIC.Slot.2-1-1
1 -> NIC.Integrated.1-1-1
2 -> NIC.Integrated.1-2-1
3 -> NIC.Slot.2-1-1
4 -> NIC.Slot.2-2-1
Select the interface used by etcd-0 backup bond interface: 4
selected interface is: NIC.Slot.2-2-1
Note: The selected network interfaces determine how the corresponding OS interface names (for example, ens2f0) are calculated. This network enumeration logic is tested on PowerEdge R640 servers. Select two interfaces, one for each “slave” interface in the bond, as sketched below.
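The following is a minimal sketch of how these selections might be recorded for etcd-0. The key names are hypothetical and shown for illustration only; the generated inventory file is the authoritative format. On PowerEdge R640 servers, NIC.Slot.2-1-1 typically enumerates in the OS as ens2f0 and NIC.Slot.2-2-1 as ens2f1:
etcd-0:                 # hypothetical keys, for illustration only
  bond: true            # bonding enabled (answered y above)
  bond_interfaces:
    active: ens2f0      # NIC.Slot.2-1-1, also the DHCP/PXE interface
    backup: ens2f1      # NIC.Slot.2-2-1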
Note: This step is not necessary if you selected 3 node in Step 4, substep b.i. Compute nodes support either Red Hat Enterprise Linux 7.9 or RHCOS 4.6 as the operating system.
Specify the bonding information and the interfaces that the bond uses for each compute node (see Step 4, substep b.iii for control-plane nodes).
ensure disknames are absolutely available. Otherwise OpenShift install fails
specify the control plane device that will be installed
default [nvme0n1]:
specify the compute node device that will be installed
default [nvme0n1]:
Note: This guide assumes that the NVMe drive in the first slot is used for the OpenShift installation.
specify cluster name
default [ocp]:
specify zone file
default [/var/named/ocp.zones]:
enter http port
default [8080]:
specify dir where ignition files will be placed
directory will be created under /var/www/html
default [ignition]:
enter the user used to install openshift
DONOT CHANGE THIS VALUE
default [core]:
enter the directory where openshift installs
directory will be created under /home/core
default [openshift]:
enter the pod network cidr
default [10.128.0.0/14]:
pod network cidr: 10.128.0.0/14
specify cidr notation for number of ips in each node:
cidr number should be an integer and less than 32
default [23]:
specify the service network cidr
default [172.30.0.0/16]:
Note: Do not change the user value from core. Only the core user can connect to cluster nodes by using SSH. The pod network CIDR, host prefix, and service network CIDR values that you specify here define the cluster's CNI network configuration.
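These defaults fit together arithmetically: with a CIDR number of 23, each node receives a /23 slice of the 10.128.0.0/14 pod network, that is, 2^(32-23) = 512 pod addresses per node, and the /14 accommodates 2^(23-14) = 512 such node subnets. In the generated install-config.yaml file, these values populate the networking section, which, assuming the defaults shown above and the OpenShift SDN network type that OpenShift 4.6 uses by default, looks similar to the following:
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14    # pod network CIDR entered above
    hostPrefix: 23         # each node receives a /23 for its pods
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16          # service network CIDR entered above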
To modify any values, rerun the related task option and enter the corrected values.
Note: This guide uses the /home/ansible/files directory containing the software bits.
vars:
  software_src: /home/ansible/files         # directory that contains the downloaded software bits
  pull_secret_file: pullsecret              # pull secret file downloaded from Red Hat
  rhel_os: rhel-server-7.9-x86_64-dvd.iso   # RHEL 7.9 ISO, used for compute nodes that run RHEL
Note: Copy the generated_inventory file from the <git clone dir>/python directory to the <git clone dir>/ansible directory.
[ansible@csah ansible] $ pwd
/home/ansible/openshift-bare-metal/ansible
[ansible@csah ansible] $ ansible-playbook -i generated_inventory ocp.yaml
The CSAH node is installed and configured with HTTP, HAProxy, DHCP, DNS, and PXE services. Also, the install-config.yaml file is generated, and the ignition config files are created and made available over HTTP.
Note: If any errors occur while the program is running, see the inventory.log file under the <git clone dir>/python directory to find out what went wrong and how to resolve the issue.