This section contains the requirements for the Bare Metal Orchestrator nodes, including software, user account access, and K8s cluster requirements.
When you install Bare Metal Orchestrator, an RKE2 cluster is automatically deployed.
All nodes in the Bare Metal Orchestrator cluster must be running an Ubuntu 20.04 LTS or a Red Hat Enterprise Linux 8.6 environment, and all nodes must have the same Linux user account configured.
For more information about storage requirements, see Storage requirements.
Global Controller software and node requirements for a single node cluster for Ubuntu
Before you deploy Bare Metal Orchestrator in an Ubuntu environment, set up the VM hosting the Global Controller as described in the following table:
Item | Details |
Operating system | The VM hosting the Global Controller node must have an Ubuntu 20.04 Linux environment. SSH must be enabled, and the following Linux utilities must be installed and running on the node: jq, coreutils, mktemp, open-iscsi, curl, findmnt, grep, awk, blkid, and lsblk. |
Set up the OS partition | During the operating system installation, create the following partition: /dev/sda—300 GB partition 1 set up on the primary SSD disk. See Storage requirements. |
hostname | Record the hostname of the server. In this guide, bmo-manager-1 is used for the Global Controller hostname, but you will supply your own. |
NTP and Python 3 | Make sure these applications are installed and running. If not, run sudo apt-get install ntp python3. |
Edit the kernel configuration | Set the default virtual memory limit of the server hosting the Global Controller node to 262144 in the sysctl.conf file and make it persistent. Run the following command to change the kernel configuration, and then save the sysctl.conf file (see the example after this table). CAUTION: If the virtual memory is not properly configured on the Global Controller (GC) node, Bare Metal Orchestrator logs do not display in the OpenSearch dashboard and the GC site goes into the failed state. |
Install make | Run the following commands on the Global Controller node (see the example after this table): |
Install Docker | Install Docker version 20.10.11; see Install Docker Engine on Ubuntu. Note: You must manually add the common installer username to the Docker group. For example: sudo adduser <username> docker |
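The command blocks for the kernel configuration and the make installation are not reproduced in this table. The following is a minimal sketch for Ubuntu 20.04; it assumes the virtual memory limit is the vm.max_map_count kernel parameter, so verify the exact parameter name against your release documentation before applying it:
# Set the virtual memory limit persistently (assumed parameter: vm.max_map_count)
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
# Install make on the Global Controller node
sudo apt-get update
sudo apt-get install -y make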
Global Controller software and node requirements for a single node cluster for Red Hat Enterprise Linux
Before you deploy Bare Metal Orchestrator in a Red Hat Enterprise Linux environment, set up the VM hosting the Global Controller as described in the following table:
Item | Details |
Operating system | The VM hosting the Global Controller node must have a Red Hat Enterprise Linux 8.6 environment. SSH must be enabled, and the following Linux utilities must be installed and running on the node: jq, coreutils, mktemp, open-iscsi, curl, findmnt, grep, awk, blkid, and lsblk. |
Enable Red Hat Enterprise Linux subscription | Run subscription-manager register as root user and enter your RHEL credentials. |
Install packages | Run the following command as root user: |
Disable firewall | Run these commands as root user (see the example after this table): |
Install Docker | Install Docker version 20.10.11. Note: You must manually add the common installer username to the Docker group. For example: sudo adduser <username> docker |
Edit the kernel configuration | Set the default virtual memory limit of the server hosting the Global Controller node to 262144 in the sysctl.conf file and make it persistent. Run the following command to change the kernel configuration, and then save the sysctl.conf file. CAUTION: If the virtual memory is not properly configured on the Global Controller (GC) node, Bare Metal Orchestrator logs do not display in the OpenSearch dashboard and the GC site goes into the failed state. |
Install software requirements | You must install iscsid. Run the following commands to install iscsi: |
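The firewall and iSCSI command blocks are not reproduced in this table. The following is a minimal sketch for Red Hat Enterprise Linux 8.6, run as the root user; the iSCSI package name (iscsi-initiator-utils) is an assumption, so confirm it against your release documentation:
# Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Install and start the iSCSI initiator (assumed package: iscsi-initiator-utils)
dnf install -y iscsi-initiator-utils
systemctl enable --now iscsid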
Global Controller (CP1) and redundant HA nodes (CP2 and CP3) software and node requirements for an HA cluster for Ubuntu
Ensure that the VMs hosting the two redundant HA nodes and Load Balancers are reachable from the Global Controller host over the network. From the Global Controller host, you should be able to run ssh <user>@<ip_address> to each of the four VMs, where <ip_address> is the IP address of the VM. For more information about network requirements, see Network requirements.
The VMs hosting CP1, CP2, and CP3 must be accessible over the network using the root account.
Before you deploy Bare Metal Orchestrator in an Ubuntu environment, set up the VMs hosting the Global Controller (CP1) and the two redundant HA nodes (CP2 and CP3) as follows:
Item | Details |
Operating system | The VMs hosting the nodes (Global Controller CP1, CP2, and CP3) must have an Ubuntu 20.04 Linux environment. SSH must be enabled, and the following Linux utilities must be installed and running on the CP1, CP2, and CP3 nodes: jq, coreutils, mktemp, open-iscsi, curl, findmnt, grep, awk, blkid, and lsblk. |
Set up the OS partition | During the operating system installation, create the following partition: /dev/sda—300 GB partition 1 set up on the primary SSD disk. See Storage requirements. |
Install software requirements | You must install iscsid on CP1, CP2, and CP3. Run the following commands to install iscsi on Ubuntu (see the example after this table): |
hostname | Record the hostnames of the servers. In this guide, example hostnames are used for CP1, CP2, and CP3, but you will supply your own. All Bare Metal Orchestrator nodes must have a unique hostname. You can run the following to change the hostname: hostnamectl set-hostname <new-hostname> |
NTP and Python 3 | Make sure these applications are installed and running. If not, run sudo apt-get install ntp python3. |
Edit the kernel configuration | Set the default virtual memory limit of the servers hosting the CP1, CP2, and CP3 nodes to 262144 in the sysctl.conf file and make it persistent. Run the following command to change the kernel configuration, and then save the sysctl.conf file. CAUTION: If the virtual memory is not properly configured on the CP1, CP2, and CP3 nodes, Bare Metal Orchestrator logs do not display in the OpenSearch dashboard and the GC site goes into the failed state. |
Set up a partition | The following partition must be set up: /dev/sdb—500 GB partition 2 (non-boot) set up on the secondary SSD disk. See Storage requirements. |
Configure multipath service | Do the following to prevent the multipath service from adding additional block devices created by the internal distributed storage handler. Check the devices created by the internal storage handler using lsblk. Note: The storage device names start with /dev/sd[x]. Create the default configuration file /etc/multipath.conf if it does not exist, and add the following line to the blacklist section (see the example after this table). |
Install make | Run the following commands on the Global Controller (CP1) node only: |
Install Docker | Install Docker version 20.10.11; see Install Docker Engine on Ubuntu. Note: You must manually add the common installer username to the Docker group. For example: sudo adduser <username> docker |
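The iSCSI and multipath command blocks are not reproduced in this table. The following is a minimal sketch for Ubuntu 20.04; the multipath blacklist pattern shown is an assumption about how the internal storage handler names its devices, so adjust it to match the devices you see with lsblk:
# Install and start the iSCSI initiator on CP1, CP2, and CP3
sudo apt-get update
sudo apt-get install -y open-iscsi
sudo systemctl enable --now iscsid
# Example blacklist entry for /etc/multipath.conf (assumed devnode pattern)
blacklist {
    devnode "^sd[a-z0-9]+"
}
# Restart multipath so the blacklist takes effect
sudo systemctl restart multipathd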
Global Controller (CP1) and redundant HA nodes (CP2 and CP3) software and node requirements for an HA cluster for Red Hat Enterprise Linux
Ensure that the VMs hosting the two redundant HA nodes and Load Balancers are reachable from the Global Controller host over the network. From the Global Controller host, you should be able to run ssh <user>@<ip_address> to each of the four VMs, where <ip_address> is the IP address of the VM. For more information about network requirements, see Network requirements.
The VMs hosting CP1, CP2, and CP3 must be accessible over the network using the root account.
Before you deploy Bare Metal Orchestrator in a Red Hat Enterprise Linux environment, set up the VMs hosting the Global Controller (CP1) and the two redundant HA nodes (CP2 and CP3) as follows:
Item | Details |
Operating system | The VMs hosting the nodes (Global Controller CP1, CP2, and CP3) must have a Red Hat Enterprise Linux 8.6 environment. SSH must be enabled, and the following Linux utilities must be installed and running on the CP1, CP2, and CP3 nodes: jq, coreutils, mktemp, open-iscsi, curl, findmnt, grep, awk, blkid, and lsblk. |
Enable Red Hat Enterprise Linux subscription | Run subscription-manager register as root user and enter your RHEL credentials. |
Install packages | Run these commands as root user after registering the subscription on all CP1, CP2, CP3, worker, and Load Balancer nodes: |
Disable firewall | Run these commands as root user: |
Install Docker on CP1 | Install Docker version 20.10.11. Note: You must manually add the common installer username to the Docker group. For example: sudo adduser <username> docker |
Edit the kernel configuration | Set the default virtual memory limit of the servers hosting the CP1, CP2, and CP3 nodes to 262144 in the sysctl.conf file and make it persistent. Run the following command to change the kernel configuration, and then save the sysctl.conf file. CAUTION: If the virtual memory is not properly configured on the CP1, CP2, and CP3 nodes, Bare Metal Orchestrator logs do not display in the OpenSearch dashboard and the GC site goes into the failed state. |
Install iscsi | You must install iscsid. Run the following commands to install iscsi: |
Load Balancer software and node requirements for an HA node cluster for Ubuntu
Ensure the VMs that host the two redundant Load Balancers are reachable from the Global Controller host over the network. From the Global Controller host, you should be able to run ssh root@<ip_address> to each of the VMs, where <ip_address> is the IP address of the VM.
The VMs hosting the Load Balancers must be accessible over the network using the root account.
Set up both Load Balancer VMs for Ubuntu environments as described in the following table:
Item | Details |
Operating system | The VMs hosting the two Load Balancers must have an Ubuntu 20.04 Linux environment. |
hostnames | Record the hostnames of the servers used to host the two Load Balancers. In this guide, the example hostnames bmo-manager-lb-1 and bmo-manager-lb-2 are used, respectively. However, you will supply your own. All Bare Metal Orchestrator nodes must have a unique hostname. You can run the following to change the hostname: hostnamectl set-hostname <new-hostname> |
NTP and Python 3 | Make sure these applications are installed and running. If not, run apt-get install ntp python3. |
Install NGINX, then stop NGINX and disable the process | Run the following command to install NGINX. Then run the following commands to stop and disable the NGINX web server. See the example after this table. |
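The NGINX command blocks are not reproduced in this table. The following is a minimal sketch for Ubuntu 20.04:
# Install NGINX on both Load Balancer VMs
sudo apt-get update
sudo apt-get install -y nginx
# Stop and disable the NGINX web server after installation
sudo systemctl stop nginx
sudo systemctl disable nginx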
Load Balancer software and node requirements for an HA node cluster for Red Hat Enterprise Linux
Ensure the VMs that host the two redundant Load Balancers are reachable from the Global Controller host over the network. From the Global Controller host, you should be able to run ssh root@<ip_address> to each of the VMs, where <ip_address> is the IP address of the VM.
The VMs hosting the Load Balancers must be accessible over the network using the root account.
Set up both Load Balancer VMs for Red Hat Enterprise Linux environments as described in the following table:
Item | Details |
Operating system | The VMs hosting the two Load Balancers must have a Red Hat Enterprise Linux 8.6 environment. |
Enable Red Hat Enterprise Linux subscription | Run subscription-manager register as root user and enter your RHEL credentials. |
Install packages | Run the following command as root user: |
Disable Firewall | Run these commands as root user: |
Install NGINX | Run the following command: |
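The installation command is not reproduced in this table. A minimal sketch for Red Hat Enterprise Linux 8.6, run as the root user:
# Install NGINX on both Load Balancer VMs
dnf install -y nginx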
Worker node requirements for Ubuntu
Ensure that worker nodes are accessible from the Global Controller (CP1) using the common installer user account.
To manage a server at the remote site, the network that the server is on must be routable to the primary network of the worker node or be routable to the primary network of the Global Controller site.
The following table lists the worker node requirements for Ubuntu environments:
Software | Supported versions and requirements |
Linux system distribution | Ubuntu 20.04 LTS |
NTP | Install NTP on the worker node server before adding the node to the Bare Metal Orchestrator cluster. Run the following command: |
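The NTP installation command is not reproduced in this table. A minimal sketch for Ubuntu 20.04:
# Install NTP on the worker node server
sudo apt-get update
sudo apt-get install -y ntp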
Worker node requirements for Red Hat Enterprise Linux
Ensure that worker node servers and virtual machines are accessible over the network using the root account.
To manage a server at the remote site, the network that the server is on must be routable to the primary network of the worker node or be routable to the primary network of the Global Controller site.
The following table lists the worker node requirements for Red Hat Enterprise Linux environments:
Software | Supported versions and requirements |
Linux system distribution | Red Hat Enterprise Linux 8.6 |
Enable Red Hat Enterprise Linux subscription | Run subscription-manager register as root user and enter your RHEL credentials. |
Install packages | Run the following command as root user: |
Disable firewall | Run these commands as root user: |
Edit the kernel configuration | Set the default virtual memory limit of the worker node servers to 262144 in the sysctl.conf file and make it persistent. Run the following command to change the kernel configuration, and then save the sysctl.conf file. |
Node access account requirements
We suggest creating a common user account called installer to align with the example login account name used in this guide. However, you can assign your own. The step to update the Ansible user account name in the all.yml file is included in the Bare Metal Orchestrator installation procedure.
Use this common user account for the following tasks:
- Initial Bare Metal Orchestrator installation
- Bare Metal Orchestrator node configuration (including worker nodes)
- Uninstalling and reinstalling the Bare Metal Orchestrator cluster.
For single node deployments, create this common user account on the VMs hosting the Global Controller node and all remote worker nodes. Ensure the common user account complies with the following:
- Passwordless sudo privileges are enabled for the duration of the installation and Bare Metal Orchestrator node configuration, including worker nodes.
- All worker nodes must have the same password.
For high availability (HA) deployments, ensure all nodes in the HA cluster have the same Linux user account configured (for example, installer). The same user account and privileges must be configured on each server hosting the following Bare Metal Orchestrator nodes:
- Global Controller (CP1) and the two redundant HA nodes (CP2 and CP3)
- The two Load Balancers
- All worker nodes
Common (installer) user requirements for HA deployments:
- Passwordless sudo privileges are enabled for the duration of the installation and Bare Metal Orchestrator node configuration, including worker nodes.
- CP1, CP2, and CP3 nodes must have the same password.
- All Load Balancer nodes must have the same password.
- All worker nodes must have the same password.
Ensure that the same user-defined account is configured on all nodes and has passwordless sudo privileges enabled for the duration of the deployment. You must manually add the common installer username to the Docker group on the Global Controller (CP1), and on the CP2 and CP3 nodes. For example:
sudo adduser <username> docker
where <username> is the name of the common user account.
The sudoers configuration is updated at /etc/sudoers. When you copy the Cmnd alias specification string into the sudoers file, you must ensure that you remove spaces between the entries on each line. If spaces are present, the file corrupts. The following is an example of the sudoers file:
installer@bmo-manager-1:~/mw_bundle$ sudo cat /etc/sudoers
#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
# Host alias specification
# User alias specification
# Cmnd alias specification
Cmnd_Alias BIN=/bin/sh,/var/lib/rancher/rke2/bin/crictl,/usr/bin/systemctl,/usr/sbin/lvm,/usr/bin/mkdir,/usr/bin/touch,/usr/bin/tee,/usr/bin/sed,/usr/bin/umount,/usr/bin/mount,/usr/bin/rmdir,/usr/sbin/mkfs.xfs,/usr/sbin/lvs,/usr/sbin/pvcreate,/usr/sbin/pvremove,/usr/sbin/vgcreate,/usr/sbin/vgdisplay,/usr/sbin/vgremove,/usr/sbin/lvcreate,/usr/sbin/lvremove,/usr/bin/awk,/usr/bin/chown,/usr/bin/chmod,/usr/bin/echo,/usr/bin/cat,/usr/bin/cp,/usr/bin/rm,/bin/systemctl,/bin/mkdir,/bin/sed,/bin/umount,/bin/rmdir,/sbin/mkfs.xfs,/bin/chown,/bin/chmod,/bin/echo,/bin/cat,/bin/cp,/bin/rm,/usr/bin/docker,/usr/local/bin/helm
root ALL=(ALL:ALL) ALL
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
installer ALL=NOPASSWD: BIN
# See sudoers(5) for more information on "#include" directives:
#includedir /etc/sudoers.d
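To spot-check the result, you can list the sudo privileges granted to the common user account; a minimal example, assuming the account is named installer:
# List the sudo rules that apply to the installer account
sudo -l -U installer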