Implementation Guide—Red Hat OpenShift Container Platform 4.12 on Intel-powered Dell Infrastructure
IPI-based deployment of the cluster
IPI deploys and configures the infrastructure on which an OpenShift Container Platform cluster runs. The CSAH node is used as the provisioner node for the deployment. The node runs the installation program and hosts the bootstrap VM that is required to deploy the OpenShift cluster.
Perform the steps in Preparing the CSAH node, and then configure the network interfaces on the CSAH node:
# Create bridge interface
nmcli connection add type bridge ifname baremetal con-name baremetal
# Create bond interface with bridge baremetal as master
nmcli connection add type bond con-name bond0 ifname bond0 bond.options "lacp_rate=1,miimon=100,mode=802.3ad,xmit_hash_policy=layer3+4" ipv4.method disabled ipv6.method ignore master baremetal
# Add slave interfaces to bond0
nmcli connection add type ethernet con-name bond-slave-0 ifname eno12399 master bond0 slave-type bond
nmcli connection add type ethernet con-name bond-slave-1 ifname eno12409 master bond0 slave-type bond
# Set the IP address on the baremetal bridge interface
nmcli connection modify baremetal ipv4.method manual ipv4.addresses 192.168.32.39/24 connection.autoconnect yes ipv4.gateway 192.168.32.1 ipv4.dns 192.168.31.50 ipv4.dns-search dcws.lab
Note: Use "baremetal" as the bridge connection name and interface name.
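Before moving on, it is worth confirming that the bond negotiated LACP and that the link is up; the kernel exposes this in /proc/net/bonding/bond0. The sketch below runs against a sample of that file embedded as a string so it is self-contained; on the CSAH node, read the file itself (for example, `grep -E 'Bonding Mode|MII Status' /proc/net/bonding/bond0`):

```shell
# Sample /proc/net/bonding/bond0 content (abridged; on a real node this comes from the kernel)
bond_status='Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100'

# Check that the bond is in 802.3ad mode and the aggregate link is up
echo "$bond_status" | grep -q '802.3ad' && echo "bond mode: 802.3ad"
echo "$bond_status" | grep -q 'MII Status: up' && echo "link: up"
```

If the mode does not show 802.3ad, verify that the switch ports for eno12399 and eno12409 are configured for LACP.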
Note: When editing the nodes_ipi.yaml inventory template, modify only the values in the file. The keys must remain unchanged.
Generate the inventory file:
cd <git clone dir>/openshift-bare-metal/python
[ansible@csah-pri python]$ python3 generate_inventory_file.py --run \
    --id_user <idrac user> --id_pass <idrac password> --nodes nodes_ipi.yaml
Note: If the iDRAC user and password are the same across all control-plane and compute nodes, run the program with the --id_user and --id_pass arguments.
Is there a backup management node [yes/No]: No
installation type:
1. UPI
2. IPI
3. Assisted Installer
enter install type: 2
1: cluster install
2: infra components
3: review inventory file
4: generate inventory file
5: exit
task choice for necessary inputs:
supported cluster install options:
1. Standard - 5+ node (3 control and 2+ compute)
2. Compact - 3 node (converged control/compute nodes)
enter cluster install option: 2
option selected: 2
Checking iDRAC connectivity for control nodes.
ipi-m1 iDRAC is reachable.
ipi-m2 iDRAC is reachable.
ipi-m3 iDRAC is reachable.
press any key to continue
The next set of questions relates to network bonding on the cluster nodes.
Do you want to perform bonding for 'control_nodes' (y/No): y
select network interfaces for node ipi-m1
1 -> NIC.Integrated.1-1-1 [LinkUp]
2 -> NIC.Integrated.1-2-1 [LinkUp]
3 -> NIC.Slot.2-1-1 [LinkDown]
4 -> NIC.Slot.2-2-1 [LinkDown]
Select the interface used by ipi-m1 first bond interface: 1
selected interface is: NIC.Integrated.1-1-1
Select the interface used by ipi-m1 second bond interface: 2
selected interface is: NIC.Integrated.1-2-1
For each question, a prompt is displayed for every control-plane and compute node.
Because the control-plane nodes have identical hardware, the same disk name applies to all three; you can select a different disk name for each compute node.
Note: An API virtual IP address provides an endpoint for all users to interact with and configure the platform. An Ingress virtual IP provides an endpoint for application traffic flowing from outside the cluster.
ensure disknames are available. Otherwise OpenShift install fails
Enter the installation disk name (Example - /dev/sda or /dev/nvme0n1) for control plane nodes
default [/dev/nvme0n1]:
enter the network type (OVNKubernetes / OpenShiftSDN) : OVNKubernetes
specify the network cidr of the external network (format x.x.x.x/x): 192.168.32.0/24
adding machine_network_cidr: 192.168.32.0/24
enter API virtual IP: 192.168.32.80
adding api_ip: 192.168.32.80
enter ingress virtual IP: 192.168.32.81
adding wildcard_ip: 192.168.32.81
enter pullsecret file location
default [/home/ansible/files/pullsecret]:
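The installation expects both virtual IPs to sit inside the machine network CIDR entered above (192.168.32.0/24 here). A self-contained way to sanity-check that before continuing, using a small illustrative `in_cidr` helper (not part of the Dell tooling):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer
ip_to_int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }

# Succeed if $1 (an IP) lies within $2 (a CIDR such as 192.168.32.0/24)
in_cidr() {
  local net=${2%/*} bits=${2#*/}
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

in_cidr 192.168.32.80 192.168.32.0/24 && echo "api VIP inside machine network"
in_cidr 192.168.32.81 192.168.32.0/24 && echo "ingress VIP inside machine network"
```

A VIP outside the machine network CIDR causes the installation to fail later, so it is cheaper to catch the mismatch at input time.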
The infra components task lets you configure a DNS server on the CSAH node or use an existing DNS server on the network.
Do you want to install DNS on CSAH [yes/No]: yes
specify a DNS forwarder if necessary (yes/No): yes
enter the DNS forwarder IP: 10.8.8.8
specify cluster name
default [ocp]: ipi-ans
specify zone file
default [/var/named/ipi-ans.zones]:
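For reference, the zone serving cluster ipi-ans must resolve the two virtual IPs entered earlier. The records below are a hand-written sketch of the standard OpenShift names (api plus the *.apps wildcard), not the contents of the generated /var/named/ipi-ans.zones file:

```shell
# Sketch of the essential A records for cluster ipi-ans in domain dcws.lab
cat > /tmp/ipi-ans.zone-sketch <<'EOF'
api.ipi-ans.dcws.lab.     IN A 192.168.32.80
*.apps.ipi-ans.dcws.lab.  IN A 192.168.32.81
EOF
cat /tmp/ipi-ans.zone-sketch
```

After the DNS configuration is in place, a query such as `dig api.ipi-ans.dcws.lab +short` from a host on the cluster network should return 192.168.32.80.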
The program creates two files in the <git clone dir>/openshift-bare-metal/python directory: generated_inventory and ansible.yaml. Copy both files into the <git clone dir>/openshift-bare-metal/ansible directory.
[ansible@csah-pri ansible]$ pwd
/home/ansible/openshift-bare-metal/ansible
[ansible@csah-pri ansible]$ ansible-playbook -i generated_inventory ansible.yaml
/usr/local/bin/openshift-baremetal-install --dir /home/kni/clusterconfigs --log-level debug create cluster
Monitor the installation progress from the installation log:
tail -f /home/kni/clusterconfigs/.openshift_install.log
After the installation completes, verify that the cluster nodes are available:
oc get nodes
The following is sample output from the command:
NAME STATUS ROLES AGE VERSION
ipi-m1.dcws.lab Ready control-plane,master,worker 9d v1.25.10+3fe2906
ipi-m2.dcws.lab Ready control-plane,master,worker 9d v1.25.10+3fe2906
ipi-m3.dcws.lab Ready control-plane,master,worker 9d v1.25.10+3fe2906
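The same readiness check can be scripted; the sketch below filters the sample output above (inlined so it runs without a cluster — on the CSAH node, pipe the output of `oc get nodes --no-headers` instead):

```shell
# Sample `oc get nodes --no-headers` output (from the cluster shown above)
nodes_output='ipi-m1.dcws.lab Ready control-plane,master,worker 9d v1.25.10+3fe2906
ipi-m2.dcws.lab Ready control-plane,master,worker 9d v1.25.10+3fe2906
ipi-m3.dcws.lab Ready control-plane,master,worker 9d v1.25.10+3fe2906'

# List any node whose STATUS column is not exactly "Ready"
not_ready=$(printf '%s\n' "$nodes_output" | awk '$2 != "Ready" {print $1}')
if [ -z "$not_ready" ]; then
  echo "all nodes Ready"
else
  echo "not ready: $not_ready" >&2
fi
```

In the compact three-node layout selected earlier, each node carries the control-plane, master, and worker roles, which is why all three appear with the combined role list.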