Preparing and running the Ansible playbooks
On the primary CSAH node, prepare and run the Ansible playbooks as the ansible user.
Note: Ensure that the CSAH node can reach the iDRAC network IPs. If there is no connectivity, manually create the inventory file based on the provided sample file. Skip step 1 if you create the inventory file manually.
Note: Ensure that you only modify values in the YAML file. Keys must always remain the same.
Note: Only RHCOS is supported for this release. Leave the value for the ‘os’ key in the nodes.yaml file as rhcos.
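For orientation, the fragment below sketches what the os key looks like in context. The surrounding key names are hypothetical and for illustration only; in the real nodes.yaml, keep the keys shipped with the repository unchanged and edit values only:

```yaml
# Hypothetical excerpt of nodes.yaml — key names other than "os" are
# illustrative; edit only the values in the real file, never the keys.
control_nodes:
  - name: etcd-0
    os: rhcos   # leave as rhcos; only RHCOS is supported in this release
```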
Note: If the iDRAC user and password are the same across all control-plane and compute nodes, run the program with the --id_user and --id_pass arguments.
[ansible@csah-pri ]$ cd <git clone dir>/openshift-bare-metal/python
[ansible@csah-pri python]$ python3 generate_inventory_file.py
--run --id_user <idrac user> --id_pass <idrac password> --release 4.10 --nodes nodes.yaml
Note: The --release 4.10 argument specifies the OpenShift version; 4.10 is the only value that the script accepts. The nodes.yaml file that is updated in the preceding step includes information about the bootstrap, control-plane, and compute nodes. For a single-node deployment, use the generate_inventory_file.py script in the <git clone dir>/openshift-bare-metal/sno-ansible/python directory.
Is there a backup management node [yes/No]: yes
Enter backup management node FQDN: csah-sec.dcws.lab
Enter the IP address of VIP used for HAProxy: <IP>
A menu of numbered tasks is displayed, as shown in the following figure:
Note: Run the program with all the tasks to ensure that all the necessary keys that are used in the Ansible playbooks are generated.
provide complete path of directory to download OCP 4.10 software bits
default [/home/ansible/files]:
Option 1 downloads OpenShift Container Platform 4.10 software from Red Hat into a directory for which user ansible has permissions. This guide assumes that the directory is specified as /home/ansible/files.
Enter the cluster installation options by selecting 3 node or 5+ node:
task choice for necessary inputs: 2
supported cluster install options:
1. 3 node (converged control/compute nodes)
2. 5+ node (3 control and 2+ compute)
enter cluster install option: 2
OpenShift 4.10 supports the 3 node and 5+ node options. If you select the 3 node option, you are not prompted for information about compute nodes. The following steps apply if you select a 5+ node cluster installation.
The following example assumes that three control-plane nodes are set up in the cluster. Select the interface that DHCP and PXE use, and select two interfaces to be used for bonding. If only one interface is available, choose NO.
Note: In this document, the interface that DHCP and the active bond interface use are the same.
Do you want to perform bonding (y/NO): y
select network interfaces for node etcd-0
1 -> NIC.Integrated.1-1-1
2 -> NIC.Integrated.1-2-1
3 -> NIC.Slot.2-1-1
4 -> NIC.Slot.2-2-1
5 -> NIC.Slot.3-1-1
6 -> NIC.Slot.3-2-1
Select the interface used by DHCP: 3
selected interface is: NIC.Slot.2-1-1
device NIC.Slot.2-1-1 mac address is 3C:FD:FE:BF:7E:60
1 -> NIC.Integrated.1-1-1
2 -> NIC.Integrated.1-2-1
3 -> NIC.Slot.2-1-1
4 -> NIC.Slot.2-2-1
5 -> NIC.Slot.3-1-1
6 -> NIC.Slot.3-2-1
Select the interface used by etcd-0 active bond interface: 3
selected interface is: NIC.Slot.2-1-1
1 -> NIC.Integrated.1-1-1
2 -> NIC.Integrated.1-2-1
3 -> NIC.Slot.2-1-1
4 -> NIC.Slot.2-2-1
5 -> NIC.Slot.3-1-1
6 -> NIC.Slot.3-2-1
Select the interface used by etcd-0 backup bond interface: 5
selected interface is: NIC.Slot.3-1-1
Note: This step is not necessary if you selected 3 node in sub step b.i. For this release, all compute nodes are installed with RHCOS 4.10.
Provide information relating to bonding and the interfaces that bonding uses for each compute node (for reference, see sub step b.ii).
ensure disknames are available. Otherwise OpenShift install fails
specify the control plane device that will be installed
default [nvme0n1]: sda
specify the compute node device that will be installed
default [nvme0n1]: sda
Note: This guide assumes that the sda disk (the RAID virtual disk on the BOSS card) is used. If necessary, a PERC H755N can be used to create a RAID volume on NVMe drives.
specify a DNS forwarder if necessary (yes/No): yes
enter the DNS forwarder IP: <DNS Forwarder IP>
specify cluster name
default [ocp]:
specify zone file
default [/var/named/ocp.zones]:
Note: If DNS Forwarder is not required, enter No.
enter http port
default [8080]:
specify dir where ignition files will be placed
directory will be created under /var/www/html
default [ignition]:
For information about the values to specify for the pod network and the service network, see the sample install-config.yaml file for bare metal. Red Hat specifies the values that the Container Network Interface (CNI) uses. Ensure that these values do not overlap with your existing network.
enter the directory where openshift installs
directory will be created under /home/core
default [openshift]:
enter the pod network cidr
default [10.128.0.0/14]:
pod network cidr: 10.128.0.0/14
specify cidr notation for number of ips in each node:
cidr number should be an integer and less than 32
default [23]:
specify the service network cidr
default [172.30.0.0/16]:
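As a rough check on the defaults above: a /14 pod network with a /23 per-node prefix yields 2^(23-14) = 512 node-sized subnets, each with 510 usable pod addresses, and Python's standard ipaddress module can confirm that the pod and service CIDRs do not overlap. A minimal sketch using the default values shown in the prompts:

```python
import ipaddress

# Default CIDRs from the inventory prompts.
pod_cidr = ipaddress.ip_network("10.128.0.0/14")
service_cidr = ipaddress.ip_network("172.30.0.0/16")
host_prefix = 23  # per-node subnet size

# Pod and service networks must not overlap each other
# (or your existing networks).
assert not pod_cidr.overlaps(service_cidr)

# Number of /23 subnets that fit in the /14 pod network: one per node.
max_nodes = 2 ** (host_prefix - pod_cidr.prefixlen)
# Usable pod IPs per node: a /23 holds 512 addresses minus network/broadcast.
pods_per_node = 2 ** (32 - host_prefix) - 2

print(max_nodes, pods_per_node)  # 512 510
```

Run the same check against any non-default values you enter before continuing, since overlapping CIDRs are not caught until much later in the install.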
Note: To modify any values, run the related option again and correct the values.
Note: This guide uses the /home/ansible/files directory for the software bits.
vars:
  pull_secret_file: pullsecret
Note: Copy the generated_inventory file from the <git clone dir>/python/ directory to the <git clone dir>/ansible directory. Ensure that the pull secret file is copied under /home/ansible/files.
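A hedged sketch of those two copy steps, demonstrated here in scratch directories so it is self-contained; in practice CLONE_DIR is your actual <git clone dir> and FILES_DIR is /home/ansible/files:

```shell
# Scratch directories stand in for the real clone and files directories.
CLONE_DIR=$(mktemp -d)
FILES_DIR=$(mktemp -d)
mkdir -p "$CLONE_DIR/python" "$CLONE_DIR/ansible"
touch "$CLONE_DIR/python/generated_inventory" "$CLONE_DIR/pullsecret"

# Copy the generated inventory next to the playbooks ...
cp "$CLONE_DIR/python/generated_inventory" "$CLONE_DIR/ansible/"
# ... and the pull secret into the files directory.
cp "$CLONE_DIR/pullsecret" "$FILES_DIR/"
```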
Note: If there is no secondary management node, use the ocp.yaml playbook file under <git clone dir>/ansible (see the sample file for guidance). For a single-node deployment, use haocp.yaml or ocp.yaml inside <git clone dir>/sno-ansible.
[ansible@csah-pri ansible] $ pwd
/home/ansible/openshift-bare-metal/ansible
[ansible@csah-pri ansible] $ ansible-playbook -i generated_inventory haocp.yaml
The primary CSAH node is installed and configured with HTTP, HAProxy, DHCP, DNS, and PXE services.
Note: HTTP, HAProxy, and DNS services are configured for a secondary CSAH node. The Keepalived service is configured on both the primary and secondary nodes only when a secondary CSAH node exists.
In addition, the install-config.yaml file is generated, and the ignition config files are created and made available over HTTP.