Deploy Bare Metal Orchestrator on AWS EC2. The AWS EC2 load balancer replaces the redundant load balancers that are deployed by default as part of the high availability (HA) cluster.
- Log in to the server that will host the Global Controller (CP1).
- From the Bare Metal Orchestrator CLI or web user interface, create the firewall port exceptions that are required for successful deployment and operation of Bare Metal Orchestrator. For more information about the required firewall port exceptions, see Port requirements.
- From the AWS management console, create a security group that allows all inbound and outbound traffic. Use this security group when deploying EC2 instances during this procedure. For more information about how to configure a security group, see the official AWS documentation on the AWS website.
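If you prefer the AWS CLI to the console, a security group like this can be sketched as follows. The group name, VPC ID, and security group ID below are placeholders you must replace with your own values, and the commands require configured AWS credentials:

```shell
# Create a security group for the BMO HA cluster.
# "bmo-ha-sg" and the VPC ID are placeholder values.
aws ec2 create-security-group \
  --group-name bmo-ha-sg \
  --description "Bare Metal Orchestrator HA cluster - allow all traffic" \
  --vpc-id vpc-0123456789abcdef0

# Allow all inbound traffic from any source (--protocol -1 means all protocols).
# Outbound traffic is allowed by default in a new security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol -1 \
  --cidr 0.0.0.0/0
```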
- From the AWS management console, deploy three EC2 instances for the three control plane nodes (CP1, CP2, and CP3) in the HA cluster using the security group that you created.
For more information about the CP1, CP2, and CP3 node sizing requirements, see High availability hardware requirements.
For more information about supported operating systems, see High availability hardware and software requirements.
For more information about how to deploy an EC2 instance, see the official AWS documentation on the AWS website.
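The three control-plane instances can also be launched from the AWS CLI. The following is a sketch only: the AMI, key pair, subnet, and security group IDs are placeholders, and the instance type shown is an assumption — choose one that satisfies the High availability hardware requirements:

```shell
# Launch three identical EC2 instances for CP1, CP2, and CP3,
# using the security group created earlier. All IDs are placeholders.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --count 3 \
  --instance-type m5.2xlarge \
  --key-name my-key-pair \
  --security-group-ids sg-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0 \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=bmo-cp}]'
```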
- From the AWS management console, create the target groups that you will attach to the AWS network load balancer. Create one target group for each of the ports 443, 6443, and 5047. CAUTION: Target group names must contain lowercase letters only; uppercase letters cause the Bare Metal Orchestrator installation to fail.
For more information about target groups, see the official AWS documentation on the AWS website.
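The three target groups can be created in one pass from the AWS CLI. The names and VPC ID below are placeholders; note that the names are deliberately lowercase-only, as the caution above requires:

```shell
# Create one TCP target group per required port.
# Names ("bmo-tg-<port>") are placeholders but must stay lowercase-only,
# or the Bare Metal Orchestrator installation fails.
for port in 443 6443 5047; do
  aws elbv2 create-target-group \
    --name "bmo-tg-${port}" \
    --protocol TCP \
    --port "${port}" \
    --vpc-id vpc-0123456789abcdef0 \
    --target-type instance
done
```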
- From the AWS management console, register the three deployed EC2 instances as targets in each of the target groups. For more information about target registration, see the official AWS documentation on the AWS website.
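Registration can likewise be scripted. The instance IDs and the target group ARN below are placeholders; repeat the command once per target group (ports 443, 6443, and 5047):

```shell
# Register the three deployed CP instances (placeholder IDs) in a target group.
# Repeat for each of the three target group ARNs.
aws elbv2 register-targets \
  --target-group-arn "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/bmo-tg-443/abcdef0123456789" \
  --targets Id=i-0aaaaaaaaaaaaaaaa Id=i-0bbbbbbbbbbbbbbbb Id=i-0cccccccccccccccc
```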
- From the AWS management console, create an AWS network load balancer with three listeners, one for each of the ports 443, 6443, and 5047. Associate each listener with the corresponding target group that you created. CAUTION: Target group names must contain lowercase letters only; uppercase letters cause the Bare Metal Orchestrator installation to fail.
For more information, see the official AWS documentation on the AWS website.
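From the AWS CLI, the load balancer and its three listeners can be sketched as below. The load balancer name, subnet ID, and both ARNs are placeholders; the target group ARNs must be the ones returned when you created the target groups:

```shell
# Create the network load balancer (name and subnet are placeholders).
aws elbv2 create-load-balancer \
  --name bmo-2-1-lb \
  --type network \
  --subnets subnet-0123456789abcdef0

# Add one TCP listener per port, forwarding to the matching target group.
# Both ARNs below are placeholders for the values returned by AWS.
for port in 443 6443 5047; do
  aws elbv2 create-listener \
    --load-balancer-arn "arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/bmo-2-1-lb/abcdef0123456789" \
    --protocol TCP \
    --port "${port}" \
    --default-actions "Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/bmo-tg-${port}/abcdef0123456789"
done
```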
- From the Bare Metal Orchestrator CLI, perform the HA Bare Metal Orchestrator deployment procedure and make the required changes to the all.yml file as described below. For the deployment procedure, see Deploy an HA Bare Metal Orchestrator cluster.
- Edit the keycloak_access_hostname and lb_vip_ip values in the inventory/my-cluster/group_vars/all.yml file with the DNS name of the AWS EC2 load balancer. The following is an example of the all.yml file configured with an AWS EC2 load balancer:
# the user account using which we would do a passwordless ssh on all the nodes
ansible_user: dell
ssh_key_filename: "id_rsa"
# cloud provider information
cloud_provider: "aws" #can be aws or dc
# external registry url if deployed
external_registry: ""
# NTP Settings
# when true optionally append your external ntp servers
ntp_enabled: true
ntp_servers:
- "{{ hostvars[groups['global_controller'][0]]['ansible_host'] | default(groups['global_controller'][0]) }}"
# Set this to the DNS name of Bare Metal Orchestrator which is also used by Keycloak.
# If the default setting is used, then access BMO from a web browser using "https://bmo-globalcontroller".
# If using windows, add an entry in your C:\windows\system32\drivers\etc\hosts file.
keycloak_access_hostname: "bmo-2-1-lb-24df746ead1f084c.elb.us-east-1.amazonaws.com"
# Backup Service Settings
velero_aws_access_key: "myaccesskey"
velero_bucket: "bmo-backup"
velero_backup_location: "https://localhost:30500" #https://ip:port
velero_ca_path: ""
# Deploy the cluster in HA mode
rke2_ha_mode: yes
# Uncomment values to deploy multi node control-plane after setting rke2_ha_mode: true
ha_worker_ip: "{{ hostvars[groups]['ha'] | default(groups['ha']) }}"
#lb_ip_1: "{{ hostvars[groups['loadbalancer'][0]]['ansible_host'] | default(groups['loadbalancer'][0]) }}"
#lb_ip_2: "{{ hostvars[groups['loadbalancer'][1]]['ansible_host'] | default(groups['loadbalancer'][1]) }}"
# Uncomment and set the hostname of the loadbalancers
#lb_hostname_1: ""
#lb_hostname_2: ""
lb_vip_ip: "bmo-2-1-lb-24df746ead1f084c.elb.us-east-1.amazonaws.com"
#longhorn mount path
storage_mount_path: "/longhorn/"
# Add Secondary IPs for Certificate Generation. Uncomment cp1_secondary_ip for singlenode only. Uncomment all 3 for HA
cp1_secondary_ip: "{{ hostvars[groups['secondary_ip'][0]]['ansible_host'] | default(groups['secondary_ip'][0]) }}"
cp2_secondary_ip: "{{ hostvars[groups['secondary_ip'][1]]['ansible_host'] | default(groups['secondary_ip'][1]) }}"
cp3_secondary_ip: "{{ hostvars[groups['secondary_ip'][2]]['ansible_host'] | default(groups['secondary_ip'][2]) }}"
# Uncomment this for IPv4/IPv6 network CIDRs to be used for pod IPs (default: 10.42.0.0/16)
#cluster_cidr: "172.28.0.0/16"
# Uncomment this for IPv4/IPv6 network CIDRs to be used for service IPs (default: 10.43.0.0/16)
#service_cidr: "172.27.0.0/16"
#longhorn info
enable_longhorn: true
# set the partition name in which the volumes will be created for longhorn
# e.g. /dev/sdb1
storage_volume: "/dev/xvdb1"
# update the id in case of multiple deployments in the same subnet
keepalive_vrrp_id: "151"
# ---------------------------------------------------------- #
# do not change any of these attributes in the section below #
# ---------------------------------------------------------- #
host_base_dir: "/root"
global_controller_ip: "{{ hostvars[groups['global_controller'][0]]['ansible_host'] | default(groups['global_controller'][0]) }}"
worker_ip: "{{ hostvars[groups]['node'] | default(groups['node']) }}"
# min size required of storage volume for longhorn nodes in GB
min_longhorn_size: 200
registry_image: "localregistry.io:5047/registry:2.8.2"
# IAM settings
keycloak_namespace: "iam"
db_namespace: "db"
db_storage: "8Gi"
# Velero settings
velero_namespace: "velero"
velero_image: "{{registry_name}}/mw/velero/velero:v1.11.0"
velero_plugin_image: "{{registry_name}}/mw/velero/velero-plugin-for-aws:v1.7.0"
minio_route: "{{ hostvars[groups['global_controller'][0]]['ansible_host'] | default(groups['global_controller'][0]) }}"
minio_port: "30500"
# version: v2.1.0-dev.96
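After editing, a quick local check can confirm that keycloak_access_hostname and lb_vip_ip carry the same load balancer DNS name. The sketch below is illustrative only: it writes a two-line stand-in file to /tmp instead of reading your real inventory/my-cluster/group_vars/all.yml:

```shell
# Illustrative check: /tmp/all-sample.yml stands in for the real
# inventory/my-cluster/group_vars/all.yml in this sketch.
cat > /tmp/all-sample.yml <<'EOF'
keycloak_access_hostname: "bmo-2-1-lb-24df746ead1f084c.elb.us-east-1.amazonaws.com"
lb_vip_ip: "bmo-2-1-lb-24df746ead1f084c.elb.us-east-1.amazonaws.com"
EOF

# Extract both values and verify they match.
kc=$(sed -n 's/^keycloak_access_hostname: "\(.*\)"$/\1/p' /tmp/all-sample.yml)
lb=$(sed -n 's/^lb_vip_ip: "\(.*\)"$/\1/p' /tmp/all-sample.yml)
if [ -n "$kc" ] && [ "$kc" = "$lb" ]; then
  echo "all.yml load balancer entries match: $kc"
else
  echo "ERROR: keycloak_access_hostname and lb_vip_ip differ" >&2
fi
```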