As user ansible, unless otherwise specified, prepare and run Ansible playbooks as follows:
[ansible@csah python]$ sudo yum install python3
[ansible@csah python]$ sudo pip3 install pyyaml requests
[ansible@csah python]$ python3 generate_inventory_file.py
Figure 2. Inventory file generation input tasks menu
Note: If in doubt, accept the default values, if any, that are listed for each option.
provide complete path of directory to download OCP 4.3 software bits
default [/home/ansible/files]:
Option 1 downloads OpenShift Container Platform 4.3 software from Red Hat into a directory for which user ansible has permissions. The instructions in this document assume that the directory is /home/ansible/files.
i Enter bootstrap node details, including the node name, the IP address assigned for bond0, and the iDRAC IP address and credentials.
ii Using the iDRAC credentials, from the list of available network devices, select the one interface whose MAC address is used for DHCP and PXE boot.
Note: Ensure that the iDRAC IP address and credentials are accurate. If they are not, an empty value ‘ ‘ is set as the MAC address, which causes the Ansible playbooks to fail. You must then add the MAC address manually so that the Ansible playbooks can run.
enter the bootstrap node name
default [bootstrap]:
ip address for os in bootstrap node: 100.82.46.26
ip address for idrac in bootstrap node: 100.82.34.26
enter the idrac user for bootstrap: root
enter idrac password for bootstrap:
1 -> NIC.Integrated.1-1-1
2 -> NIC.Integrated.1-2-1
3 -> NIC.Slot.2-1-1
4 -> NIC.Slot.2-2-1
Select the interface used by DHCP: 3
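The NIC list shown above is gathered from the iDRAC, and the note above warns about an empty MAC address slipping into the inventory. A minimal sketch of that check, assuming a payload shaped like a Redfish EthernetInterface resource (the helper and sample data are illustrative, not part of generate_inventory_file.py):

```python
def mac_from_redfish(interface_payload):
    """Extract the MAC address from a Redfish EthernetInterface-style dict.

    Returns None when the field is missing or blank, which is exactly the
    condition that produces an invalid inventory and failing playbooks.
    Hypothetical helper, not part of generate_inventory_file.py.
    """
    mac = (interface_payload or {}).get("MACAddress", "")
    return mac.strip() or None

# Sample payloads shaped like Redfish EthernetInterface resources:
good = {"Id": "NIC.Slot.2-1-1", "MACAddress": "b0:26:28:55:10:a0"}
bad = {"Id": "NIC.Slot.2-2-1", "MACAddress": " "}

mac_from_redfish(good)   # a usable MAC address
mac_from_redfish(bad)    # None: fix the iDRAC credentials and rerun
```

If the function returns None for any node, correct the iDRAC details and rerun the option before starting the playbooks.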
i Add control plane and worker node details, including node names, the IP address that is assigned for bond0, and the iDRAC IP address and credentials.
Note: You must provide the IP address for bond0, node names, and other details for each node one at a time.
ii Using the iDRAC credentials, from the list of available network devices, select the one interface whose MAC address is used for DHCP and PXE boot.
The default bond options value is based on best practices; do not change it. You can change the primary interface name if it differs from the default value.
Interface names are based on the slot in which the NIC is placed in the server. This document assumes that RHCOS is used for both control plane and worker nodes and that the slot 2 NIC ports are used. The remaining instructions assume that the interface names are ens2f0 and ens2f1.
Note: Port enumeration in RHCOS is based on Red Hat Enterprise Linux 8 standards.
enter bond name
default [bond0]:
enter bond interfaces separated by ','
default [ens2f0,ens2f1]:
enter bond options
default [mode=active-backup,miimon=100,primary=ens2f0]:
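For reference, the bond name, member interfaces, and options entered above are typically passed to RHCOS as a dracut bond= kernel argument at PXE boot. The exact kernel command line is generated by the playbooks; with the defaults shown, the fragment would look something like this (illustrative):

```
bond=bond0:ens2f0,ens2f1:mode=active-backup,miimon=100,primary=ens2f0
```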
Note: This document assumes that the nvme drive in the first slot is used for the OpenShift installation.
ensure disknames are absolutely available. Otherwise OpenShift install fails
specify the master device that will be installed
default [nvme0n1]:
specify the bootstrap device that will be installed
default [nvme0n1]:
specify the worker device that will be installed
default [nvme0n1]:
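The disk-name warning above matters because the installer writes RHCOS to the named device and fails if it is absent. A small sketch of a pre-flight check you might run on a node, comparing the names entered above against the kernel's block-device list (the helper is hypothetical, not part of the generator):

```python
import os

def missing_disks(required, available=None):
    """Return the subset of required disk names not present on the system.

    `required` is the list of device names entered at the prompts above
    (for example ["nvme0n1"]); `available` defaults to the kernel's block
    devices in /sys/block. Hypothetical helper, not part of the generator.
    """
    if available is None:
        available = os.listdir("/sys/block")
    return [d for d in required if d not in available]

# Example with an explicit device list, as /sys/block would report it:
missing_disks(["nvme0n1"], available=["nvme0n1", "sda"])  # empty list: OK
```

Any name returned by the check would cause the OpenShift installation to fail, so correct it in the inventory first.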
specify zone file
default [/var/named/ocp.zones]:
specify cluster name
default [ocp]:
enter http port
default [8080]:
specify dir where ignition files will be placed
directory will be created under /var/www/html
default [ignition]:
specify the version of ocp
default [4.3]:
enter a default lease time for dhcp
default [800]:
enter max lease time for dhcp
default [7200]:
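The two lease times above map directly to dhcpd.conf directives that the playbooks render on the CSAH node. With the defaults, the relevant fragment would look like this (values in seconds; illustrative):

```
default-lease-time 800;
max-lease-time 7200;
```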
enter the user used to install openshift
DONOT CHANGE THIS VALUE
default [core]:
enter the directory where openshift installs
directory will be created under /home/core
default [openshift]:
enter the pod network cidr
default [10.128.0.0/14]:
pod network cidr: 10.128.0.0/14
specify cidr notation for number of ips in each node:
cidr number should be an integer and less than 32
default [23]:
specify the service network cidr
default [172.30.0.0/16]:
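The three values above fit together: the pod network CIDR is carved into one per-node subnet of the prefix length entered (23 by default), and the service network must not overlap it. A quick sanity check of the defaults with Python's ipaddress module:

```python
import ipaddress

# Defaults entered at the prompts above
pod_network = ipaddress.ip_network("10.128.0.0/14")
host_prefix = 23  # per-node subnet size; must be an integer less than 32
service_network = ipaddress.ip_network("172.30.0.0/16")

# Each node receives one /23 slice of the /14 pod network:
addresses_per_node = 2 ** (32 - host_prefix)            # 512 pod IPs per node
max_nodes = 2 ** (host_prefix - pod_network.prefixlen)  # 512 /23 subnets in a /14

# The pod and service networks must be disjoint
assert not pod_network.overlaps(service_network)
```

Shrinking host_prefix gives each node more pod IPs at the cost of fewer addressable nodes, and vice versa.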
Note: Do not change the user value from core. Only the core user is allowed to SSH into cluster nodes.
To correct any values that were entered incorrectly, rerun the related option.
To review a sample file, see <git clone directory>/containers/ansible/hosts.
Note: You must have the appropriate Red Hat Customer Portal credentials to download the pull secret file.
Note: This document uses the /home/ansible/files directory containing the software bits.
vars:
software_src: /home/ansible/files
pull_secret_file: pullsecret
Note: Copy inventory_file from <git clone dir>/containers/python/ to <git clone dir>/containers/ansible/hosts.
[ansible@csah ansible] $ pwd
/home/ansible/containers/ansible
[ansible@csah ansible] $ ansible-playbook -i hosts <git clone dir>/containers/ansible/ocp.yml
The CSAH node is installed and configured with HTTP, DHCP, DNS, and PXE services. Also, the install-config.yaml file is generated, and the ignition config files are created and made available over HTTP.
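Once the playbooks finish, the ignition configs should be reachable over HTTP at the port and directory chosen at the prompts. A small sketch that builds the URLs to spot-check (the hostname csah and the standard OpenShift ignition filenames bootstrap.ign, master.ign, and worker.ign are assumptions here):

```python
def ignition_urls(host, http_port=8080, ign_dir="ignition"):
    """Build the URLs that cluster nodes fetch their ignition configs from,
    given the HTTP port and ignition directory entered at the prompts.
    Hypothetical helper for a post-install sanity check."""
    return [f"http://{host}:{http_port}/{ign_dir}/{name}"
            for name in ("bootstrap.ign", "master.ign", "worker.ign")]

urls = ignition_urls("csah")
# Verify each from the CSAH node, for example with:
#   curl -s -o /dev/null -w "%{http_code}\n" <url>
```

Each URL should return HTTP 200 before you PXE-boot the cluster nodes.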