IPI-based deployment (Implementation Guide: Red Hat OpenShift Container Platform 4.14 on AMD-powered Dell Infrastructure)
IPI deploys and configures the infrastructure on which an OpenShift Container Platform cluster runs. The CSAH node is used as the provisioner node for the deployment. The node runs the installation program and hosts the bootstrap VM that is required to deploy the OpenShift cluster.
Perform the steps in Preparing the CSAH node, and then run the following commands to configure the network on the CSAH node:
# Create bridge interface
nmcli connection add type bridge ifname baremetal con-name baremetal
# Create bond interface with bridge baremetal as master
nmcli connection add type bond con-name bond0 ifname bond0 bond.options "lacp_rate=1,miimon=100,mode=802.3ad,xmit_hash_policy=layer3+4" ipv4.method disabled ipv6.method ignore master baremetal
# Add slave interfaces to the bond
nmcli connection add type ethernet con-name bond-slave-0 ifname eno12399 master bond0 slave-type bond
nmcli connection add type ethernet con-name bond-slave-1 ifname ens2f0 master bond0 slave-type bond
# Set IP Address to baremetal interface
nmcli connection modify baremetal ipv4.method manual ipv4.addresses 192.168.35.62/24 connection.autoconnect yes ipv4.gateway 192.168.35.1 ipv4.dns 192.168.36.51 ipv4.dns-search dcws.lab
Note: Use "baremetal" as the bridge connection name and interface name.
Note: Ensure that you modify only values in the file. Keys must always remain the same.
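As a quick local sanity check (an illustrative helper, not part of the Dell tooling), you can confirm that an edited copy of the file changed only values by comparing the extracted key lists of the original and edited files. The file names and contents below are placeholders for demonstration:

```shell
# Demonstration with two small sample files: same keys, different values.
cat > /tmp/nodes_orig.yaml <<'EOF'
cluster_name: ocp
api_ip: 0.0.0.0
EOF
cat > /tmp/nodes_edit.yaml <<'EOF'
cluster_name: ipi
api_ip: 192.168.35.92
EOF
# If only values changed, the extracted key lists are identical.
if diff <(grep -oE '^[A-Za-z_]+:' /tmp/nodes_orig.yaml) \
        <(grep -oE '^[A-Za-z_]+:' /tmp/nodes_edit.yaml) >/dev/null; then
  echo "keys unchanged"
else
  echo "keys differ"
fi
```

If the check prints `keys differ`, review the edit before running the inventory generator.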
[ansible@csah ~]$ cd <git clone dir>/openshift-bare-metal/python
[ansible@csah python]$ python3 generate_inventory_file.py
--run --id_user <idrac user> --id_pass <idrac password> --nodes nodes_ipi.yaml
Note: If the iDRAC user and password are the same across all the control-plane and compute nodes, run the program using the --id_user and --id_pass arguments.
Is there a backup management node [yes/No]: No
installation type:
1. UPI
2. IPI
3. Assisted Installer
enter install type: 2
1: cluster install
2: infra components
3: review inventory file
4: generate inventory file
5: exit
task choice for necessary inputs:
supported cluster install options:
1. Standard - 5+ node (3 control and 2+ compute)
2. Compact - 3 node (converged control/compute nodes)
enter cluster install option: 2
option selected: 2
Checking iDRAC connectivity for control nodes.
etcd0 iDRAC is reachable.
etcd1 iDRAC is reachable.
etcd2 iDRAC is reachable.
press any key to continue
The next set of questions relates to network bonding on the cluster nodes.
Do you want to perform bonding for 'control_nodes' (y/No): y
select network interfaces for node etcd0
1 -> NIC.Integrated.1-1-1 [LinkUp]
2 -> NIC.Integrated.1-2-1 [LinkDown]
3 -> NIC.Slot.1-1-1 [LinkUp]
4 -> NIC.Slot.1-2-1 [LinkDown]
Select the interface used by etcd0 first bond interface: 1
selected interface is: NIC.Integrated.1-1-1
Select the interface used by etcd0 second bond interface: 3
selected interface is: NIC.Slot.1-1-1
For each question, a prompt is displayed for all the control-plane and compute nodes.
Note: Control planes have similar hardware, so the disk names are the same for all three control-plane nodes. A different disk name is selectable for each compute node.
Note: A virtual API IP address provides an endpoint for all users to interact with and configure the platform. A virtual ingress IP address provides an endpoint for application traffic flowing from outside the cluster.
ensure disknames are available. Otherwise OpenShift install fails
Enter the installation disk name (Example - /dev/sda or /dev/nvme0n1) for control plane nodes
default [/dev/nvme0n1]:
enter the network type (OVNKubernetes / OpenShiftSDN) : OVNKubernetes
specify the network cidr of the external network (format x.x.x.x/x): 192.168.35.0/24
adding machine_network_cidr: 192.168.35.0/24
enter API virtual IP: 192.168.35.92
adding api_ip: 192.168.35.92
enter ingress virtual IP: 192.168.35.93
adding wildcard_ip: 192.168.35.93
enter pullsecret file location
default [/home/ansible/files/pullsecret]:
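The API and ingress virtual IPs must be unused addresses inside the machine network CIDR. The following is an illustrative local check using the example values from this walkthrough (it is not part of the installer or the Dell tooling):

```shell
# Check that each virtual IP falls inside the machine network CIDR.
CIDR=192.168.35.0/24
for vip in 192.168.35.92 192.168.35.93; do
  if python3 -c "import ipaddress, sys; sys.exit(0 if ipaddress.ip_address('$vip') in ipaddress.ip_network('$CIDR') else 1)"; then
    echo "$vip is inside $CIDR"
  else
    echo "$vip is OUTSIDE $CIDR"
  fi
done
```

Also confirm that neither VIP is already assigned to a host on the network before starting the installation.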
The task for infra components lets you configure a DNS server on the CSAH node or use an existing DNS in the network.
Do you want to install DNS on CSAH [yes/No]: yes
specify a DNS forwarder if necessary (yes/No): yes
enter the DNS forwarder IP: 10.8.8.8
specify cluster name
default [ocp]: ipi
specify zone file
default [/var/named/ipi.zones]:
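For reference, the generated zone file resolves the cluster endpoints to the two virtual IPs. The following is a sketch of the kind of records involved, using the example cluster name (ipi) and domain (dcws.lab) from this walkthrough; the actual file is generated by the playbook and may differ:

```
; illustrative excerpt of /var/named/ipi.zones
api.ipi.dcws.lab.     IN A    192.168.35.92   ; API virtual IP
*.apps.ipi.dcws.lab.  IN A    192.168.35.93   ; ingress (wildcard) virtual IP
```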
The program creates two files in the <git clone dir>/openshift-bare-metal/python directory: generated_inventory and ansible.yaml.
[ansible@csah ansible]$ pwd
/home/ansible/openshift-bare-metal/ansible
[ansible@csah ansible]$ ansible-playbook -i generated_inventory ansible.yaml
/usr/local/bin/openshift-baremetal-install --dir /home/kni/clusterconfigs --log-level debug create cluster
tail -f /home/kni/clusterconfigs/.openshift_install.log
oc get nodes
The command produces the following sample output:
oc get nodes
NAME    STATUS   ROLES                         AGE   VERSION
etcd0   Ready    control-plane,master,worker   5d    v1.27.10+28ed2d7
etcd1   Ready    control-plane,master,worker   5d    v1.27.10+28ed2d7
etcd2   Ready    control-plane,master,worker   5d    v1.27.10+28ed2d7
You can scale up an existing OpenShift cluster by adding more compute nodes.
If DNS is hosted on the CSAH node, update the /var/named/<zone> file and restart the named service by running the following command:
systemctl restart named
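Before the restart, the zone file needs a forward record for each new compute node. The following entry is hypothetical; the hostname compute0 and the address are illustrative only:

```
; added to /var/named/<zone> for the new node (illustrative values)
compute0   IN A   192.168.35.70
```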
To expand an existing cluster:
oc -n openshift-machine-api create -f <name of yaml file>
The file creates two secrets: the first contains the NMState configuration, and the second contains the BMC credentials. Both secrets are referenced in the BareMetalHost resource.
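The YAML typically follows the metal3 BareMetalHost API. The following is a hedged sketch of its shape, with placeholder names, credentials, and addresses; the exact fields in your generated file may differ:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: compute0-network-config-secret   # NMState config for the node
  namespace: openshift-machine-api
type: Opaque
stringData:
  nmstate: |
    # interface and bond definitions for the node (elided)
---
apiVersion: v1
kind: Secret
metadata:
  name: compute0-bmc-secret              # iDRAC credentials
  namespace: openshift-machine-api
type: Opaque
stringData:
  username: <idrac user>
  password: <idrac password>
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: compute0
  namespace: openshift-machine-api
spec:
  online: true
  bootMACAddress: <mac address>
  bmc:
    address: idrac-virtualmedia://<idrac ip>/redfish/v1/Systems/System.Embedded.1
    credentialsName: compute0-bmc-secret
    disableCertificateVerification: true
  preprovisioningNetworkDataName: compute0-network-config-secret
```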
The node boots from the CoreOS live image. Its state progresses from Registering to Inspecting to Available.
[root@csah ~]# oc get bmh -n openshift-machine-api
The following code excerpt is sample output from the command:
[root@csah ~]# oc get bmh -n openshift-machine-api
NAME       STATE                    CONSUMER                ONLINE   ERROR   AGE
compute0   available                                        true             14m
etcd0      externally provisioned   ipi414-qzn4q-master-0   true             12h
etcd1      externally provisioned   ipi414-qzn4q-master-1   true             12h
etcd2      externally provisioned   ipi414-qzn4q-master-2   true             12h
[root@csah ~]# oc get machinesets -n openshift-machine-api
The following is sample output from the command:
[root@csah ~]# oc get machinesets -n openshift-machine-api
NAME                    DESIRED   CURRENT   READY   AVAILABLE   AGE
ipi414-qzn4q-worker-0   0         0                             12h
[root@csah ~]# oc scale machinesets ipi414-qzn4q-worker-0 -n openshift-machine-api --replicas 2
[root@csah ~]# oc get bmh -n openshift-machine-api
The following is sample output from the command:
[root@csah ~]# oc get bmh -n openshift-machine-api
NAME       STATE                    CONSUMER                      ONLINE   ERROR   AGE
compute0   provisioned              ipi414-qzn4q-worker-0-579cn   true             25m
etcd0      externally provisioned   ipi414-qzn4q-master-0         true             17h
etcd1      externally provisioned   ipi414-qzn4q-master-1         true             17h
etcd2      externally provisioned   ipi414-qzn4q-master-2         true             17h
The nodes progress from Available to Provisioning to Provisioned.
oc get csr -o name | xargs oc adm certificate approve
oc get nodes,co