The install bundle is a .tar.gz compressed file. You must extract the install bundle and configure some initial Bare Metal Orchestrator cluster settings on the host server before you install the Bare Metal Orchestrator high availability (HA) cluster.
The secondary storage partition /dev/sdb1 is mounted at the file path /longhorn on each of the control plane nodes (CP1, CP2, and CP3), see Configure distributed storage.
The Bare Metal Orchestrator installation bundle is already downloaded and saved to a location on the server where the Global Controller (CP1) node is to be deployed, see Download the installation bundle.
You need the hostname of the VM you plan to use for the Global Controller (CP1) node.
Networking is configured on the server hosting the Global Controller (CP1) and the two redundant HA nodes (CP2 and CP3), see Network requirements.
If DHCP auto-discovery is used on the server hosting the Global Controller (CP1) node, you must configure a secondary interface for DHCP discovery before installing Bare Metal Orchestrator, see Configure a secondary interface for DHCP auto-discovery.
Note: We recommend revoking passwordless sudo privileges for the common user account after the Bare Metal Orchestrator installation is complete and the cluster nodes are configured, including worker nodes.
CAUTION: Onboarding fails for any server with an IP address within the Bare Metal Orchestrator reserved subnets 10.42.0.0/16 and 10.43.0.0/16.
Perform this procedure on the Global Controller (CP1) node to prepare the server to host the Bare Metal Orchestrator cluster, and then deploy the installation bundle.
A Docker image of the K8s components is installed first, then the platform components, and finally the Dell Technologies Bare Metal Orchestrator components, the Global Controller, and remote sites.
The following example hostnames are used in this procedure:
bmo-manager-1 for the Global Controller node CP1
bmo-manager-2 for the redundant HA node CP2
bmo-manager-3 for the redundant HA node CP3
bmo-manager-lb1 and bmo-manager-lb2 for the two Load Balancer nodes
Log in to the server that will host the Global Controller (CP1).
Extract the install bundle. For example:
tar -xvzf mw_bundle-v2.2.0.tar.gz
where mw_bundle-v2.2.0.tar.gz is an example install bundle name. The files are extracted to the mw_bundle directory.
Change directory to mw_bundle and proceed to set up the cluster configuration.
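For example:
cd mw_bundle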
Update the IP address of the Global Controller node in inventory/my-cluster/hosts.ini and optionally add worker node IP addresses.
vi inventory/my-cluster/hosts.ini
The following is an example hosts.ini file for an HA deployment with internal distributed storage, where:
The GC node is bmo-manager-1.
The two redundant HA node hostnames are bmo-manager-2 and bmo-manager-3.
The load balancer hostnames in this example are bmo-manager-lb1 and bmo-manager-lb2.
IP address variables to be replaced with actual IP addresses are identified with angle brackets. For example: <bmo-manager-1 IP>.
[secondary_ip]
;; This section is optional.
; <bmo-manager-1 secondary interface IP> ;; set for single node and HA cluster
; <bmo-manager-2 secondary interface IP> ;; set for HA cluster
; <bmo-manager-3 secondary interface IP> ;; set for HA cluster
[hosts]
;; IP addresses for a five-node HA cluster with internal storage and two worker nodes
<bmo-manager-1 IP> ansible_python_interpreter=/usr/bin/python3
<bmo-manager-2 IP> ansible_python_interpreter=/usr/bin/python3
<bmo-manager-3 IP> ansible_python_interpreter=/usr/bin/python3
<bmo-manager-lb1 IP> ansible_python_interpreter=/usr/bin/python3
<bmo-manager-lb2 IP> ansible_python_interpreter=/usr/bin/python3
<Worker node 1 IP> ansible_python_interpreter=/usr/bin/python3
<Worker node 2 IP> ansible_python_interpreter=/usr/bin/python3
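For illustration only, a completed [hosts] section might look like the following, where the 192.168.10.x addresses are hypothetical values standing in for your actual node IP addresses:
[hosts]
;; IP addresses for a five-node HA cluster with internal storage and two worker nodes
192.168.10.11 ansible_python_interpreter=/usr/bin/python3
192.168.10.12 ansible_python_interpreter=/usr/bin/python3
192.168.10.13 ansible_python_interpreter=/usr/bin/python3
192.168.10.14 ansible_python_interpreter=/usr/bin/python3
192.168.10.15 ansible_python_interpreter=/usr/bin/python3
192.168.10.16 ansible_python_interpreter=/usr/bin/python3
192.168.10.17 ansible_python_interpreter=/usr/bin/python3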
For each Load Balancer node, log in to the VM that is hosting the node and update its /etc/hosts file, following the same pattern as the examples below.
For the Global Controller (CP1) and each of the two redundant HA nodes (CP2 and CP3), log in to the VM that is hosting the node and update the /etc/hosts file to add the IP address of the node.
The following is an example entry added to the /etc/hosts file on CP1.
## this is an example /etc/hosts for bmo-manager-1
127.0.0.1 localhost
127.0.1.1 bmo-manager-1
<bmo-manager-1 IP> localregistry.io
The following is an example entry added to the /etc/hosts file on CP2.
## this is an example /etc/hosts for bmo-manager-2
127.0.0.1 localhost
127.0.1.1 bmo-manager-2
<bmo-manager-2 IP> localregistry.io
The following is an example entry added to the /etc/hosts file on CP3.
## this is an example /etc/hosts for bmo-manager-3
127.0.0.1 localhost
127.0.1.1 bmo-manager-3
<bmo-manager-3 IP> localregistry.io
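The Load Balancer nodes follow the same pattern. The following sketch is for bmo-manager-lb1 and assumes that the same localregistry.io mapping is required on Load Balancer nodes; verify this against your deployment before use.
## this is an example /etc/hosts for bmo-manager-lb1
127.0.0.1 localhost
127.0.1.1 bmo-manager-lb1
<bmo-manager-lb1 IP> localregistry.io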
Edit the inventory/my-cluster/group_vars/all.yml file as described in the following substeps, and then save the file:
cd inventory/my-cluster/group_vars
vi all.yml
Change the ansible_user: dell attribute to ansible_user: installer (the recommended installer user), or to the common user account that was created on all Bare Metal Orchestrator nodes for installing the cluster.
Edit the HA section of the inventory/my-cluster/group_vars/all.yml file as follows:
Set the rke2_ha_mode attribute to true.
Uncomment the lines needed to deploy a multi-node control plane.
Add the Load Balancer virtual IP (VIP) address.
Optionally, if a Docker-based external registry is used with Bare Metal Orchestrator, update the external_registry attribute with the IP address and port (ipaddress:port) of the external registry. If the value is left blank, the internal registry is used.
The following are example excerpts of the HA section of the all.yml file that is set for a high availability configuration. For a complete sample all.yml file, see Sample all.yml.
Note: For high availability deployments, you can add any free IP address as the lb_vip_ip attribute.
Note: The lb_vip_ip address will be used later to access the Bare Metal Orchestrator HA web user interface.
# Deploy the cluster in HA mode
rke2_ha_mode: false
# Uncomment values to deploy multi node control-plane after setting rke2_ha_mode: true
#ha_worker_ip: "{{ hostvars[groups]['ha'] | default(groups['ha']) }}"
#lb_ip_1: "{{ hostvars[groups['loadbalancer'][0]]['ansible_host'] | default(groups['loadbalancer'][0]) }}"
#lb_ip_2: "{{ hostvars[groups['loadbalancer'][1]]['ansible_host'] | default(groups['loadbalancer'][1]) }}"
# Uncomment and set the hostname of the loadbalancers
#lb_hostname_1: ""
#lb_hostname_2: ""
#lb_vip_ip: ""
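For reference, the following sketch shows how the same section might look after it is edited for an HA deployment. The lb_hostname values are taken from the example hostnames in this procedure, and the VIP value is a placeholder for any free IP address:
# Deploy the cluster in HA mode
rke2_ha_mode: true
# Uncomment values to deploy multi node control-plane after setting rke2_ha_mode: true
ha_worker_ip: "{{ hostvars[groups]['ha'] | default(groups['ha']) }}"
lb_ip_1: "{{ hostvars[groups['loadbalancer'][0]]['ansible_host'] | default(groups['loadbalancer'][0]) }}"
lb_ip_2: "{{ hostvars[groups['loadbalancer'][1]]['ansible_host'] | default(groups['loadbalancer'][1]) }}"
# Uncomment and set the hostname of the loadbalancers
lb_hostname_1: "bmo-manager-lb1"
lb_hostname_2: "bmo-manager-lb2"
lb_vip_ip: "<free IP address for the Load Balancer VIP>"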
Note: The storage partition name is /dev/sdb1 by default. The same storage partition name must be set on each HA node (CP2 and CP3) to match the Global Controller (CP1).
This is also where you can configure Bare Metal Orchestrator to refer to your own partitions if they differ from the examples used in this guide.
Update the storage mount path as follows:
#longhorn mount path
storage_mount_path: "/longhorn/"
Set the enable_longhorn attribute to true.
For example:
#longhorn info
enable_longhorn: true
Optional: Change the default Bare Metal Orchestrator hostname in the inventory/my-cluster/group_vars/all.yml file to a hostname of your choice.
The following is an example of the hostname entry in the all.yml file, where the default hostname for Bare Metal Orchestrator is bmo-globalcontroller.
# If the default setting is used, then access the web UI from a web browser using
# https://bmo-globalcontroller
keycloak_access_hostname: "bmo-globalcontroller"
Note: If you are not using an FQDN and are using a web browser from a different host (such as a Windows PC), you must access the system using https://bmo-globalcontroller. However, the host running the web browser cannot resolve that hostname on its own, so you must add the Bare Metal Orchestrator hostname and related IP address to your local hosts file. For instructions, see Updating the Bare Metal Orchestrator hostname and web UI access.
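For example, assuming the lb_vip_ip address is used to reach the HA web UI, the entry added to the local hosts file (C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts on Linux) would look like the following:
<lb_vip_ip> bmo-globalcontroller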
If there are multiple deployments in the same subnet, set the keepalive_vrrp_id attribute to a unique value in the supported range:
keepalive_vrrp_id: [1-255]
The default value is 151. If there are multiple clusters in the same subnet, each cluster must use a different value.
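For example, a second cluster in the same subnet could use any other unused ID in the range; the value 152 here is only illustrative:
keepalive_vrrp_id: 152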
Optionally, uncomment and set the following attributes to override the default pod and service network CIDRs, for example if the defaults conflict with addresses already in use on your network:
# Uncomment this for IPv4/IPv6 network CIDRs to be used for pod IPs (default: 10.42.0.0/16)
#cluster_cidr: "172.28.0.0/16"
# Uncomment this for IPv4/IPv6 network CIDRs to be used for service IPs (default: 10.43.0.0/16)
#service_cidr: "172.27.0.0/16"
Run the lsblk command and confirm the correct mounted partition assignments on the Global Controller (CP1), and on the two redundant HA nodes (CP2 and CP3).
The following is an example of mounted partition assignments, where sda is the first disk with the partitions sda1, sda2, and sda5, and sdb is the second disk.
The sda device shown here is just an example and may differ in your implementation. However, sdb must match the example: the sdb1 partition must be mounted at /longhorn.
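A representative lsblk listing might look like the following; the sda layout and all device sizes are illustrative only:
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  100G  0 disk
├─sda1   8:1    0   99G  0 part /
├─sda2   8:2    0    1K  0 part
└─sda5   8:5    0    1G  0 part [SWAP]
sdb      8:16   0  500G  0 disk
└─sdb1   8:17   0  500G  0 part /longhorn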
Run the following commands and provide the server passwords as prompted:
cd /home/<user>/mw_bundle
./mw-install install -i
Press Enter to see each page of the EULA. When prompted, enter y to accept the EULA. After accepting the EULA, you are prompted for the following passwords:
PASSWORD_GC—The password of the common Linux user account configured for the Global Controller (CP1) host server.
PASSWORD_HA—The password of the common Linux user account configured for the two HA (CP2 and CP3) host servers.
PASSWORD_WORKER—The password of the common Linux user account configured for worker node host servers.
PASSWORD_LB—The password of the common Linux user account configured for the two Load Balancer host servers.
PASSWORD_KEYCLOAK—Create an initial Identity and Access Management (IAM) admin user password.
PASSWORD_BACKUP—Optionally, enter the password for the external backup storage location.
PASSWORD_OPENSEARCH—Optionally, enter the password for the OpenSearch dashboard.
Note: For the initial IAM admin user password that you define in this step, if you use special characters, you must enclose the password in single quotes. The following special characters are supported: !@_-*
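For example, a hypothetical password that uses only supported special characters would be entered with enclosing single quotes:
'Bmo_admin-1!'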
Log files are available in mw_bundle/logs.
The RKE2 cluster, Bare Metal Orchestrator platform components, and the Bare Metal Orchestrator cluster are installed.
The installer automatically creates a sample kubeconfig file for an initial admin user in /etc/rancher/rke2/config_admin.yaml.
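To confirm cluster access with this kubeconfig, you can run a quick check; this sketch assumes kubectl is installed on the node:
export KUBECONFIG=/etc/rancher/rke2/config_admin.yaml
kubectl get nodes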