The section describes the software requirements for nodes in a Bare Metal Orchestrator high availability cluster in the Ubuntu environment.
All nodes in the Bare Metal Orchestrator cluster must have the same Linux environment, either Ubuntu 20.04 LTS or Red Hat Enterprise Linux 8.6. Ensure that the same Linux user account is configured on all nodes.
Ensure that the VMs hosting the two redundant HA nodes (CP2 and CP3) and the Load Balancers are reachable from the Global Controller (CP1) host over the network. From the Global Controller host, you should be able to run ssh <user>@<ip_address> to each of the four VMs, where <ip_address> is the IP address of the VM. For more information about network requirements, see Network requirements.
The VMs hosting CP1, CP2, and CP3 must be accessible over the network using the root account.
Global Controller (CP1) and HA nodes (CP2 and CP3) software requirements for Ubuntu environments
Before you deploy Bare Metal Orchestrator in an Ubuntu environment, set up the VMs hosting the Global Controller (CP1) and the two redundant HA nodes (CP2 and CP3) as follows:
Item | Details |
Operating system | The VMs hosting the nodes (Global Controller CP1, CP2, and CP3) must have an Ubuntu 20.04 Linux environment. SSH must be enabled, and the following Linux utilities must be installed and running on the CP1, CP2, and CP3 nodes: jq, coreutils, mktemp, open-iscsi, curl, findmnt, grep, awk, blkid, and lsblk. |
Set up the OS installation | During the operating system installation, create the following partition: /dev/sda, a 300 GB partition (partition 1) set up on the primary SSD disk. See Storage requirements. |
Install software requirements | You must install iscsid on CP1, CP2, and CP3. On Ubuntu, iscsid is provided by the open-iscsi package. |
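The original command listing is not reproduced here; a minimal sketch, assuming the standard Ubuntu open-iscsi package is the source of the iscsid daemon:

```shell
# Install the open-iscsi package (provides the iscsid daemon)
sudo apt-get update
sudo apt-get install -y open-iscsi

# Enable and start iscsid so it survives reboots
sudo systemctl enable --now iscsid

# Confirm the daemon is running
systemctl status iscsid --no-pager
```

Run these commands on CP1, CP2, and CP3.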
hostname | Record the hostnames of the servers. This guide uses example hostnames, but you will supply your own. All Bare Metal Orchestrator nodes must have a unique hostname. You can run the following to change the hostname: hostnamectl set-hostname <new-hostname> |
NTP and Python 3 | Make sure these applications are installed and running. If not, run sudo apt-get install ntp python3. |
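A quick verification sketch, assuming the NTP daemon is registered under the systemd unit name ntp on Ubuntu 20.04:

```shell
# Confirm the NTP daemon is active
systemctl is-active ntp

# Confirm Python 3 is installed and on the PATH
python3 --version
```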
Edit the kernel configuration | Set the default virtual memory limit of the servers hosting the Global Controller and HA nodes to 262144 in the sysctl.conf file and make it persistent, and then save the sysctl.conf file. CAUTION: If the virtual memory is not properly configured on the CP1, CP2, and CP3 nodes, Bare Metal Orchestrator logs do not display in the OpenSearch dashboard and the GC site goes into the failed state. |
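The exact commands are not included above; a sketch, assuming the limit in question is the vm.max_map_count kernel parameter that OpenSearch requires:

```shell
# Append the setting to /etc/sysctl.conf so it persists across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf

# Apply the change immediately without a reboot
sudo sysctl -p

# Verify the running value
sysctl -n vm.max_map_count
```

Repeat on CP1, CP2, and CP3.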
Set up a partition | The following partition must be set up: /dev/sdb, a 500 GB partition (partition 2, non-boot) set up on the secondary SSD disk. See Storage requirements. |
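The guide does not include partitioning commands; one hedged sketch using parted, assuming the secondary disk is /dev/sdb and receives a single GPT partition spanning the disk (partition numbering on your system may differ):

```shell
# Label the secondary disk with a GPT partition table
sudo parted --script /dev/sdb mklabel gpt

# Create one partition covering the whole disk
sudo parted --script /dev/sdb mkpart primary 0% 100%

# Confirm the new layout
lsblk /dev/sdb
```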
Configure multipath service | Do the following to prevent the multipath service from claiming additional block devices created by the internal distributed storage handler: Check the devices created by the internal storage handler using lsblk. Note: The storage device names start with /dev/sd[x]. Create the default configuration file /etc/multipath.conf if it does not exist, add the required line to the blacklist section, and then restart the multipathd service so the change takes effect. |
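The blacklist line and follow-up commands are omitted above; a sketch, assuming a devnode pattern that blacklists the /dev/sd[x] devices reported by lsblk (the exact pattern is an assumption and should match your environment):

```shell
# Add a blacklist section to /etc/multipath.conf, creating the file if needed
sudo tee -a /etc/multipath.conf > /dev/null <<'EOF'
blacklist {
    devnode "^sd[a-z0-9]+"
}
EOF

# Restart multipathd so the new blacklist takes effect
sudo systemctl restart multipathd

# Verify that multipath no longer claims the storage handler devices
sudo multipath -ll
```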
Install make | Install the make utility on the Global Controller (CP1) node only. |
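The command listing is omitted above; a minimal sketch using the standard Ubuntu package:

```shell
# Install make on the Global Controller (CP1) node only
sudo apt-get update
sudo apt-get install -y make

# Verify the installation
make --version
```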
Install Docker | Install Docker version 20.10.11. See Install Docker Engine on Ubuntu. Note: You must manually add the common installer username to the docker group. For example: sudo adduser <username> docker |
Load Balancer software requirements for Ubuntu environments
Ensure that the VMs that host the two redundant Load Balancers are reachable from the Global Controller host over the network. From the Global Controller host, you should be able to run ssh root@<ip_address> to each of the VMs, where <ip_address> is the IP address of the VM.
The VMs hosting the Load Balancers must be accessible over the network using the root account.
Set up both Load Balancer VMs for Ubuntu environments as described in the following table:
Item | Details |
Operating system | The VMs hosting the two Load Balancers must have an Ubuntu 20.04 Linux environment. |
hostnames | Record the hostnames of the servers used to host the two Load Balancers. In this guide, the example hostnames bmo-manager-lb-1 and bmo-manager-lb-2 are used, respectively. However, you will supply your own. All Bare Metal Orchestrator nodes must have a unique hostname. You can run the following to change the hostname: hostnamectl set-hostname <new-hostname> |
NTP and Python 3 | Make sure these applications are installed and running. If not, run sudo apt-get install ntp python3. |
Install NGINX, then stop NGINX and disable the process | Install the NGINX web server, and then stop and disable the NGINX process so that it does not start automatically. |
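The commands themselves are omitted above; a sketch using the standard Ubuntu nginx package and systemd:

```shell
# Install the NGINX web server
sudo apt-get update
sudo apt-get install -y nginx

# Stop the web server and prevent it from starting at boot
sudo systemctl stop nginx
sudo systemctl disable nginx

# Verify that NGINX is inactive (is-active returns nonzero when stopped)
systemctl is-active nginx || true
```

Run these commands on both Load Balancer VMs.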
For worker node requirements, see Worker node software requirements–Ubuntu.
For storage requirements, see Storage requirements.