This reference architecture uses five node types: bastion, master, infrastructure, application, and storage.
To achieve a low-latency link between etcd and the Red Hat OpenShift master services, you can install the etcd key-value store directly on the master nodes. Dell EMC recommends running both the Red Hat OpenShift masters and etcd in a highly available configuration: run at least three Red Hat OpenShift master nodes in conjunction with an external active-passive load balancer and etcd clustering.
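A minimal openshift-ansible inventory sketch for such a topology is shown below. This is an illustration only: the host names are placeholders, and the `[lb]` group here stands in for the load-balancer host in your environment.

```ini
# Hypothetical hosts — replace with your own; assumes openshift-ansible 3.11
[OSEv3:children]
masters
etcd
nodes
lb

[OSEv3:vars]
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift.example.com

[masters]
master1.example.com
master2.example.com
master3.example.com

# Co-locating etcd on the masters keeps the etcd link local to each master
[etcd]
master1.example.com
master2.example.com
master3.example.com

[lb]
lb.example.com
```

Three masters and three etcd members tolerate the loss of one node while preserving etcd quorum.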
The OpenShift Container Platform web console is a user interface accessed from a web browser. It is started on the master node as an OpenShift binary that serves the static content required to run the console. The web console lets users visualize, browse, and manage the contents of their projects.
For more information, see the OpenShift Container Registry.
This reference architecture uses one storage class consisting of NVMe drives only. Although the use of mixed drive types is possible, do not mix drive types within an OpenShift Container Storage cluster: each storage node must use a single drive type, with enough drives of that type to form a complete storage cluster. Three storage nodes is the supported minimum, but Dell EMC recommends four or more; this recommendation applies to the number of nodes, not to the number of drives. A storage node can operate with a single drive, although a single drive is not suitable for bare-metal servers.
You can disable workload scheduling on the storage nodes if storage performance is expected to be critical. See the ‘Managing Nodes’ chapter in the OpenShift Container Platform 3.11 documentation.
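As a sketch, marking a storage node unschedulable in OpenShift Container Platform 3.11 can be done from the command line; the node name below is a placeholder:

```shell
# Prevent application pods from being scheduled on a storage node
oc adm manage-node storage-node1.example.com --schedulable=false

# Verify: the node should now report SchedulingDisabled
oc get nodes
```

Pods already running on the node are not evicted by this command; only new scheduling is prevented.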
The following figure shows the OpenShift Container Platform node roles.
Red Hat OpenShift Container Storage can be configured to provide persistent storage and dynamic provisioning for OpenShift Container Platform. Gluster storage can be containerized within OpenShift Container Platform (converged mode) or non-containerized on its own nodes (independent mode).
With converged mode, Red Hat Gluster Storage runs containerized directly on OpenShift Container Platform nodes. This mode allows compute and storage instances to be scheduled and run from the same set of hardware. Converged mode is available starting with Red Hat Gluster Storage 3.1 Update 3. For more information, see Red Hat OpenShift Container Storage for OpenShift Container Platform.
The following figure shows the converged mode of cluster storage:
Figure 4. Cluster storage - converged mode
GlusterFS volumes present a POSIX-compliant file system and consist of one or more bricks across one or more nodes in a cluster. A brick is a directory on a storage node, typically the mount point for a block storage device. GlusterFS handles the distribution and replication of files across a volume’s bricks according to that volume’s configuration.
Dell EMC recommends using heketi for most common volume management operations, such as create, delete, and resize. OpenShift Container Platform expects heketi to be present when the GlusterFS provisioner is used. By default, heketi creates three-way replica volumes; that is, volumes in which each file has three copies across three different nodes. Therefore, any Red Hat Gluster Storage cluster that heketi uses must have at least three nodes available.
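These operations are exposed through heketi-cli. The following is a hedged example; the heketi service address, admin key, and volume ID are placeholders for illustration:

```shell
# Point heketi-cli at the heketi service (placeholder address and key)
export HEKETI_CLI_SERVER=http://heketi-storage.example.com:8080
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY=adminsecret

# Create a 100 GiB three-way replicated volume (replica 3 is the default)
heketi-cli volume create --size=100 --replica=3

# List volumes, then expand one by 50 GiB (VOLUME_ID is a placeholder)
heketi-cli volume list
heketi-cli volume expand --volume=VOLUME_ID --expand-size=50

# Delete a volume that is no longer needed
heketi-cli volume delete VOLUME_ID
```

Note that with three-way replication, a 100 GiB volume consumes roughly 300 GiB of raw capacity across the cluster.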
GlusterFS volumes can be provisioned either statically or dynamically. Static provisioning is available in all configurations; dynamic provisioning is supported only in converged mode and independent mode.
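Dynamic provisioning is driven by a StorageClass that points at heketi. The sketch below uses the standard kubernetes.io/glusterfs provisioner; the heketi endpoint, namespace, and secret names are placeholder assumptions:

```yaml
# StorageClass backed by heketi (resturl, user, and secret are placeholders)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage.example.com:8080"
  restuser: "admin"
  secretNamespace: "app-storage"
  secretName: "heketi-secret"
---
# A claim against the class; the provisioner creates the GlusterFS volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: glusterfs-storage
  resources:
    requests:
      storage: 10Gi
```

When the claim is bound, heketi carves a new three-way replicated volume out of the registered storage nodes and the PersistentVolume is created automatically.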
Gluster-block volumes are volumes that you mount over iSCSI. A file is created on an existing GlusterFS volume and is then presented as a block device over an iSCSI target. Such GlusterFS volumes are called block-hosting volumes.
Because Gluster-block volumes are consumed as iSCSI targets, they can be mounted by only one node or client at a time, in contrast to GlusterFS volumes, which can be mounted by multiple nodes or clients. Because each Gluster-block volume is a file on the back end, operations that are typically costly on GlusterFS volumes (such as metadata lookups) are converted into operations that are typically much faster on GlusterFS volumes (such as reads and writes). This conversion can yield substantial performance improvements for certain workloads.
Although GlusterFS supports other modes of presentation for the storage that it manages, Red Hat currently recommends using Gluster-block volumes only for OpenShift Logging and OpenShift Metrics storage. While application workloads can use this integrated storage starting with Red Hat OpenShift Container Platform 3.11, this use has not been validated in production at the time of writing. As a best practice, we recommend using dedicated enterprise-grade storage technologies for application use and locating this storage outside of the OpenShift Container Platform cluster.
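In an openshift-ansible 3.11 inventory, directing OpenShift Metrics and OpenShift Logging at a Gluster-block storage class might look like the following sketch. The class name glusterfs-storage-block is the default created by the converged-mode installer, but verify the name in your environment before use:

```ini
[OSEv3:vars]
# Metrics (Cassandra) on dynamically provisioned Gluster-block storage
openshift_metrics_install_metrics=true
openshift_metrics_cassandra_storage_type=dynamic
openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-storage-block

# Logging (Elasticsearch) on the same Gluster-block storage class
openshift_logging_install_logging=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_storage_class_name=glusterfs-storage-block
```

Because each Elasticsearch and Cassandra pod mounts its volume exclusively, the single-client limitation of iSCSI-backed Gluster-block volumes is not a constraint for these workloads.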
For more information, see Complete Example Using GlusterFS.