The install bundle is a compressed .tar.gz file. Before you install the single node Bare Metal Orchestrator cluster, you must extract the install bundle and configure some initial Bare Metal Orchestrator cluster settings on the host server.
- The Bare Metal Orchestrator installation bundle is already downloaded and saved to a location on the server where the Global Controller node is to be deployed, see Download the installation bundle.
- You need the hostname of the VM you plan to use for the Global Controller node.
- Networking is configured on the server hosting the Global Controller node, see Network requirements.
- If DHCP auto-discovery is used on the server hosting the Global Controller node, you must configure a secondary interface for DHCP discovery before installing Bare Metal Orchestrator, see Configure a secondary interface for DHCP auto-discovery.
- A user account with passwordless sudo privileges configured for the duration of the installation, see Enable and disable passwordless sudo privileges.
Note: We recommend revoking passwordless sudo privileges for the common user account after the Bare Metal Orchestrator installation is complete and the cluster nodes are configured, including worker nodes.
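For reference, passwordless sudo for a common installation user (here the hypothetical user installer) is typically granted with a sudoers drop-in such as the following sketch; follow Enable and disable passwordless sudo privileges for the supported procedure:

```
# /etc/sudoers.d/installer -- hypothetical drop-in; remove after installation
installer ALL=(ALL) NOPASSWD:ALL
```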
CAUTION: Onboarding fails for any server whose IP address falls within the Bare Metal Orchestrator reserved subnets 10.42.0.0/16 or 10.43.0.0/16.
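The reserved-subnet check above can be sketched in shell; 10.42.3.7 is a hypothetical server address used only for illustration:

```shell
# Flag addresses inside the reserved /16 subnets (10.42.0.0/16 and 10.43.0.0/16).
ip="10.42.3.7"                      # hypothetical address of a server to onboard
case "$ip" in
  10.42.*|10.43.*) echo "conflict: $ip is inside a reserved subnet" ;;
  *)               echo "ok: $ip is safe to onboard" ;;
esac
```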
Perform this procedure on the Global Controller node to prepare the server to host the Bare Metal Orchestrator cluster, and then deploy the installation bundle.
A Docker image of the Kubernetes (K8s) components is installed first, followed by the platform components, and finally the Dell Technologies Bare Metal Orchestrator components, the Global Controller, and remote sites.
Note: For real-world deployments, we recommend installing a high availability (HA) Bare Metal Orchestrator cluster. A single node cluster cannot be upgraded to an HA cluster.
- Log in to the server that will host the Global Controller.
- Extract the install bundle .tar.gz file. For example:
tar -xvzf mw_bundle-v2.2.0.tar.gz
where mw_bundle-v2.2.0.tar.gz is a sample install bundle name. The files extract to the mw_bundle directory.
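As a self-contained illustration of the extract step (the real bundle name varies by release, so a stand-in archive is created here first):

```shell
# Build a stand-in bundle so the extract step can be demonstrated end to end.
mkdir -p mw_bundle && touch mw_bundle/mw-install
tar -czf demo_bundle.tar.gz mw_bundle
rm -r mw_bundle

# The actual step: extract the bundle; files land in the mw_bundle directory.
tar -xvzf demo_bundle.tar.gz
ls mw_bundle
```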
- Change to the /home/<user>/mw_bundle directory and proceed to set up the cluster configuration.
cd /home/<user>/mw_bundle
- Change directory to inventory/my-cluster/.
- Update the IP address of the Global Controller node in inventory/my-cluster/hosts.ini and optionally add worker node IP addresses.
vi hosts.ini
The following is an example hosts.ini file, where the example hostname bmo-manager-1 is used for the Global Controller node. Alternatively, enter the private IP address of the VM hosting the Global Controller.
IP address variables to be replaced with actual IP addresses are identified with angle brackets, for example: <bmo-manager-1 IP>.
Note: For single node installation, only global_controller, node, and hosts are required.
[global_controller]
<bmo-manager-1 IP>
[ha]
[loadbalancer]
[secondary_ip] ;; This section is optional.
; <bmo-manager-1 secondary interface IP> ;; set for single node and HA cluster
; <bmo-manager-2 secondary interface IP> ;; set for HA cluster.
; <bmo-manager-3 secondary interface IP> ;; set for HA cluster.
[node]
<Worker node 1 IP>
<Worker node 2 IP>
[node-remove]
<Worker node 3 IP>
<Worker node 4 IP>
[hosts]
<bmo-manager-1 IP> ansible_python_interpreter=/usr/bin/python3
<Worker node 1 IP> ansible_python_interpreter=/usr/bin/python3
<Worker node 2 IP> ansible_python_interpreter=/usr/bin/python3
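For a single-node installation, a filled-in hosts.ini might look like the following sketch, where 192.0.2.10 is a hypothetical documentation address; substitute the IP address of your Global Controller node:

```
[global_controller]
192.0.2.10

[node]

[hosts]
192.0.2.10 ansible_python_interpreter=/usr/bin/python3
```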
- Edit the inventory/my-cluster/group_vars/all.yml file, and then save the file. Do the following:
cd inventory/my-cluster/group_vars
vi all.yml
- Change the ansible_user: dell attribute to ansible_user: installer (the recommended installer user), or to the common user account that was created on all Bare Metal Orchestrator nodes for installing the cluster.
- Update the storage_volume attribute as shown in the example. The following is a snippet from an example all.yml file. For a complete sample all.yml file, see Sample all.yml.
# set the partition name in which the volumes will be created for longhorn
# e.g. /dev/sdb1
storage_volume: ""
This is also where you can configure Bare Metal Orchestrator to refer to your own partitions if they differ from the examples used in this guide.
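For example, with a dedicated second disk partitioned for storage, the attribute might be set as follows (/dev/sdb1 is a hypothetical device name; confirm yours with lsblk):

```
# set the partition name in which the volumes will be created for longhorn
storage_volume: "/dev/sdb1"
```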
Note: Optionally, if a Docker-based external registry is used with Bare Metal Orchestrator, update the external_registry attribute with the ipaddress:port of the external registry. If the value is left blank, the internal registry is used.
- Ensure that the enable_longhorn attribute is set to false (default).
For example:
#longhorn info
enable_longhorn: false
- Optional: Change the default Bare Metal Orchestrator hostname in the inventory/my-cluster/group_vars/all.yml file to a hostname of your choice.
The following is an example of the hostname entry in the all.yml file, where the default hostname for Bare Metal Orchestrator is bmo-globalcontroller.
# If the default setting is used, then access BMO web UI from a web browser using
# https://bmo-globalcontroller
keycloak_access_hostname: "bmo-globalcontroller"
If you use a corporate DNS server, then you must enter the FQDN and not just the hostname. For more information, see Updating the Bare Metal Orchestrator hostname and web UI access.
Note: If you are not using an FQDN and are using a web browser from a different host (such as a Windows PC), you must access the system using https://bmo-globalcontroller. However, the host running the web browser cannot resolve this address, so you must add the Bare Metal Orchestrator hostname and related IP address to that host's local hosts file. For instructions, see Updating the Bare Metal Orchestrator hostname and web UI access.
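The hosts-file entry can be sketched as follows. Here 192.0.2.10 is a hypothetical Global Controller IP, and the sketch writes to a demo copy; on a real Linux host you would edit /etc/hosts (on Windows, C:\Windows\System32\drivers\etc\hosts):

```shell
# Append the Bare Metal Orchestrator hostname mapping. A temp file stands in
# for the real hosts file so this sketch is safe to run anywhere.
HOSTS_FILE=$(mktemp)                      # substitute /etc/hosts on Linux
echo "192.0.2.10 bmo-globalcontroller" >> "$HOSTS_FILE"
grep bmo-globalcontroller "$HOSTS_FILE"
```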
- Optional: Edit the default cluster CIDR values in the all.yml file if required. For more information, see Change default CIDR subnets for Bare Metal Orchestrator.
For example:
# Uncomment this for IPv4/IPv6 network CIDRs to be used for pod IPs (default: 10.42.0.0/16)
#cluster_cidr: "172.28.0.0/16"
# Uncomment this for IPv4/IPv6 network CIDRs to be used for service IPs (default: 10.43.0.0/16)
#service_cidr: "172.27.0.0/16"
- Run the following commands and provide the server passwords as prompted:
cd /home/<user>/mw_bundle
./mw-install install -i
Press Enter to see each page of the EULA. When prompted, enter y to accept the EULA. After accepting the EULA, you are prompted for the following passwords:
- PASSWORD_GC—The password of the common Linux user account configured for the Global Controller (CP1) host server.
- PASSWORD_HA—Skip this for single node deployments. This is the password of the common Linux user account configured for the two HA (CP2 and CP3) host servers for high availability (HA) deployments.
- PASSWORD_WORKER—The password of the common Linux user account configured for worker node host servers.
- PASSWORD_LB—Skip this for single node deployments. This is the password of the common Linux user account configured for the two Load Balancer host servers for high availability (HA) deployments.
- PASSWORD_KEYCLOAK—Create an initial Identity and Access Management (IAM) admin user password.
- PASSWORD_BACKUP—Optionally, enter the password for the external backup storage location.
- PASSWORD_OPENSEARCH—Optionally, enter the password for the OpenSearch dashboard.
Note: For the initial IAM admin user password that you define in this step, if you use special characters, you must enclose the password in single quotes. The following special characters are supported: !@_-*
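The quoting rule can be illustrated in shell; Adm1n!@_-* is a made-up password that uses only the supported special characters:

```shell
# Single quotes keep the supported special characters (!@_-*) literal,
# so the shell passes the password through unmodified.
PASSWORD='Adm1n!@_-*'
echo "$PASSWORD"
```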
Log files are available in mw_bundle/logs.
The RKE2 cluster, Bare Metal Orchestrator platform components, and the Bare Metal Orchestrator cluster are installed. The installer also automatically creates a sample kubeconfig file for the initial admin user in /etc/rancher/rke2/config_admin.yaml.
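To use the generated admin kubeconfig with Kubernetes tooling, a typical session might look like the following sketch (kubectl and a running cluster are assumed; only the kubeconfig path comes from the installer output above):

```shell
# Point Kubernetes tooling at the admin kubeconfig created by the installer.
export KUBECONFIG=/etc/rancher/rke2/config_admin.yaml
echo "$KUBECONFIG"
# With the cluster up, you could then run, for example: kubectl get nodes
```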
Next step: Configure the Global Controller site.
If the cluster installation fails or you need to reinstall the cluster, you can uninstall the cluster and then redeploy it, see Uninstall and redeploy Global Controller and HA nodes.