This section describes how Amazon EKS Anywhere Bare Metal is deployed on a Dell PowerFlex bare metal cluster. You can also deploy this solution on other Dell PowerFlex family products.
The following figure shows the bare metal architecture of Amazon EKS Anywhere on PowerFlex.
Note: The solution that is described in this section was validated in the Dell engineering lab with the hardware specifications that are provided in Configuration details.
Each storage node contains five 1.4 TB SAS SSD drives and eight 25 GbE network links. For the validation shown here, four PowerFlex storage nodes were used to provide full redundancy.
For the compute nodes, we used two 2U nodes. Both hosts have the PowerFlex Container Storage Interface (CSI) plug-in installed to provide access to PowerFlex storage. The plug-in is deployed as part of the PXE boot process along with the Ubuntu operating system.
Note: There is no hypervisor installed; storage is provided by the four storage nodes. The result is a two-layer architecture with separate storage and compute layers.
A two-layer architecture makes it possible to scale resources independently as needed, allowing optimal resource utilization. If more storage is needed, it can be added without increasing compute capacity; likewise, additional compute capacity can be added without expanding storage.
Outside of the Amazon EKS Anywhere instance, there are two physical nodes, both central to building the control plane and worker nodes. The administration node is where the user controls the Amazon EKS Anywhere instance; it also serves as a portal for uploading inventory information to the Tinkerbell node. The Tinkerbell node hosts the infrastructure services stack and is key to provisioning and PXE booting the bare metal workloads.
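In EKS Anywhere bare metal deployments, the hardware inventory is typically supplied to the admin node as a CSV file describing each machine's BMC, network, and disk details. A minimal sketch of that format is shown below; all hostnames, addresses, credentials, and MAC values are placeholders:

```csv
hostname,bmc_ip,bmc_username,bmc_password,mac,ip_address,netmask,gateway,nameservers,labels,disk
eksa-cp-1,10.10.10.11,admin,password,00:00:00:00:00:01,10.10.20.11,255.255.255.0,10.10.20.1,8.8.8.8,type=cp,/dev/sda
eksa-wk-1,10.10.10.12,admin,password,00:00:00:00:00:02,10.10.20.12,255.255.255.0,10.10.20.1,8.8.8.8,type=worker,/dev/sda
```

The `labels` column lets the cluster configuration select which machines become control-plane nodes and which become workers.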
After a configuration file describing the data center hardware has been uploaded, a cluster configuration file is generated. The hardware configuration and cluster configuration files, both in YAML format, are processed by Tinkerbell to create a bootstrap cluster on the admin host that installs the Cluster API (CAPI) and the Cluster API Provider Tinkerbell (CAPT).
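The cluster configuration file can be generated on the administration node with the `eksctl anywhere` CLI and then edited to reference the hardware labels from the inventory. A command sketch, assuming a hypothetical cluster name `mgmt-cluster`:

```shell
# Generate a starter cluster spec for the Tinkerbell (bare metal) provider
eksctl anywhere generate clusterconfig mgmt-cluster \
  --provider tinkerbell > mgmt-cluster.yaml
```

The generated YAML is then customized (control-plane endpoint, machine selectors, OS details) before cluster creation.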
When the base control environment is operational, CAPI creates the cluster node resources, and CAPT maps and powers on the corresponding bare metal servers. The servers PXE boot from the Tinkerbell node and then join the Kubernetes cluster. Finally, cluster management resources are transferred from the bootstrap cluster to the target Amazon EKS Anywhere workload cluster.
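This provisioning flow is driven by a single create command on the administration node, which consumes both the cluster spec and the hardware inventory. A sketch, assuming the hypothetical file names from above:

```shell
# Create the cluster; CAPI/CAPT power on and PXE boot the listed machines
eksctl anywhere create cluster \
  -f mgmt-cluster.yaml \
  --hardware-csv hardware.csv

# Verify the bare metal nodes joined the new cluster
kubectl get nodes \
  --kubeconfig mgmt-cluster/mgmt-cluster-eks-a-cluster.kubeconfig
```

The kubeconfig path shown is illustrative; `eksctl anywhere` writes the workload cluster's kubeconfig into a folder named after the cluster.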
With the control-plane and worker nodes created, the local bootstrap kind cluster is deleted from the admin machine. The SDC drivers are then installed on the worker nodes along with the Dell CSI plug-in for PowerFlex. At this point, workloads can be deployed to the worker nodes as needed.
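Once the CSI plug-in is running, a StorageClass exposes PowerFlex volumes to workloads through the driver's `csi-vxflexos.dellemc.com` provisioner. A minimal sketch, assuming a hypothetical storage pool name and PowerFlex system ID:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerflex-sc
provisioner: csi-vxflexos.dellemc.com   # Dell CSI driver for PowerFlex
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
  storagepool: pool1          # hypothetical PowerFlex storage pool
  systemID: "1a2b3c4d5e6f"    # hypothetical PowerFlex system ID
```

Persistent volume claims that reference this StorageClass are then satisfied by the four-node PowerFlex storage layer.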
With bare metal deployments, it is possible to scale environments independently based on resource demands. PowerFlex software-defined infrastructure not only supports a malleable environment like this, but also allows the mixing of environments to include hyper-converged components. This means that an infrastructure can be tailored to the needs of the environment instead of the environment being forced to conform to the infrastructure. It also creates an environment that unifies the competing demands of data sovereignty and cloud IT, by enabling data to maintain appropriate residence while unifying the control plane.