OpenShift Container Platform 4.2 introduced support for the Container Storage Interface (CSI), driven by the Operator framework. The CSI control-plane components run on the control-plane nodes and orchestrate the configuration and tear-down of data-path storage operations. Storage driver plug-in support was available in earlier Kubernetes releases, but it required integrating volume plug-ins into the core Kubernetes codebase. Kubernetes version 1.19 is integrated into OpenShift Container Platform 4.6.
CSI reached general availability (GA) in Kubernetes v1.13, replacing the volume plug-in system. Volume plug-ins were built "in-tree," that is, as part of the Kubernetes source code; therefore, changes or fixes to the volume plug-ins provided by storage vendors had to be made in lockstep with the core Kubernetes release schedule. The CSI specification standardizes how block and file storage systems are exposed to workloads running on container orchestration systems such as Kubernetes. Kubernetes can now be readily extended to support any storage solution for which the vendor provides a CSI driver. Vendors can manage the life cycle of their drivers directly, using an Operator, without waiting for the next core Kubernetes release.
Drivers are typically shipped as container images. These images are not platform-aware, so additional components are required to enable interaction between OpenShift Container Platform and the driver image. An external CSI controller, running on infrastructure nodes, consists of three containers: the attacher, the provisioner, and the driver container. The attacher and provisioner containers serve as translators, mapping OpenShift Container Platform calls to the corresponding calls on the CSI driver; no other communication with the CSI driver is allowed. On each compute node, a CSI driver DaemonSet is created that contains the CSI driver and a CSI registrar. The registrar registers the driver with the openshift-node service, which then connects directly to the driver. The following figure shows this architecture:
Figure 9. CSI architecture
Support for snapshots of CSI volumes was added in Kubernetes v1.19 and is available in OpenShift Container Platform 4.6 as a Tech Preview feature. OpenShift provides the CSI Snapshot Controller Operator, which manages snapshot objects. The CSI driver must implement an external snapshotter sidecar container to enable snapshot functionality. All Dell Storage CSI drivers support snapshots.
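As a minimal sketch, a snapshot of a CSI-backed volume is requested through a VolumeSnapshot object. The class, namespace, and PVC names below are placeholders, not values from this document; OpenShift Container Platform 4.6 ships the v1beta1 snapshot API.

```yaml
# Sketch only: class, namespace, and PVC names are placeholders.
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: demo-snapshot
  namespace: demo
spec:
  volumeSnapshotClassName: demo-snapclass   # snapshot class provided by the vendor driver
  source:
    persistentVolumeClaimName: demo-pvc     # existing PVC backed by a CSI volume
```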
The following table provides an overview of Dell Technologies storage platforms with their corresponding CSI and protocol support. These capabilities reflect what has been implemented in the CSI drivers that are intended for use with OpenShift Container Platform 4.6.
Table 4. Dell Technologies CSI storage products and capabilities
Storage capability | PowerMax | PowerFlex operating system | Unity | PowerScale | PowerStore
Static provisioning | Yes | Yes | Yes | Yes | Yes
Dynamic provisioning | Yes | Yes | Yes | Yes | Yes
Binding | Yes | Yes | Yes | Yes | Yes
Retain Reclaiming | Yes | Yes | Yes | Yes | Yes
Delete Reclaiming | Yes | Yes | Yes | Yes | Yes
Create Snapshot Volume | No | Yes | Yes | Yes | Yes
Create Volume from Snapshot | No | Yes | Yes | Yes | Yes
Delete Snapshot | No | Yes | Yes | Yes | Yes
Access Mode | ReadWrite | ReadWrite | ReadWrite | ReadWrite | ReadWrite
FC | Yes | N/A | Yes | N/A | Yes
iSCSI | Yes | N/A | Yes | N/A | Yes
NFS | N/A | N/A | No | Yes | Yes
Other protocols | N/A | ScaleIO protocol | N/A | N/A | N/A
Red Hat Enterprise Linux node | Yes | Yes | Yes | Yes | Yes
RHCOS node | Yes | No | Yes | Yes | Yes
Advanced storage feature support is being added to the CSI driver reference specifications. New in Kubernetes v1.19 is beta support for snapshots, enabling customers to back up and restore application data.
Dell Technologies CSI drivers for FC and iSCSI arrays format volumes with either xfs or ext4 before mounting them to the pods.
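The file system choice is typically made in the StorageClass. The sketch below uses the standard `csi.storage.k8s.io/fstype` parameter handled by the CSI external provisioner; the provisioner name and class name are placeholders, and the exact parameter keys for each Dell driver are documented with that driver.

```yaml
# Sketch only: the provisioner and class names are placeholders for the
# values documented by the specific Dell CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: demo-block-xfs
provisioner: csi-demo.example.com        # replace with the driver's provisioner name
parameters:
  csi.storage.k8s.io/fstype: xfs         # file system applied before the volume is mounted
reclaimPolicy: Delete
volumeBindingMode: Immediate
```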
Among other factors, consider workload performance and volume access requirements. For example, an NFS-capable array is a preferred option for workloads that require concurrent access from multiple clients (that is, the ReadWriteMany access mode).
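A shared-access claim of this kind can be sketched as a PVC requesting ReadWriteMany; the namespace and storage class names are assumptions for illustration.

```yaml
# Sketch only: names are placeholders; the class must map to an
# NFS-capable platform (for example, PowerScale or PowerStore).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
  namespace: demo
spec:
  accessModes:
    - ReadWriteMany                # concurrent access from multiple nodes and pods
  storageClassName: demo-nfs       # NFS-backed storage class
  resources:
    requests:
      storage: 10Gi
```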
Dell Technologies provides a Red Hat-certified Operator to deploy and manage the life cycle of CSI drivers for OpenShift Container Platform 4.6. The Operator deploys and manages the life cycle (installation, upgrade, uninstallation) of all the CSI drivers listed in Table 4, as shown in the following figure:
Figure 10. Operator-managed drivers
The storage array type dictates the specific Operator configuration parameters: the API endpoint for managing the storage platform, the protocol, the storage pool, and so on.
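To make these parameters concrete, the following is a hypothetical custom resource. The kind, API group, and every field name below are illustrative placeholders, not the actual Dell CSI Operator schema; consult the driver documentation for the real resource definition.

```yaml
# Hypothetical resource: kind and field names are illustrative placeholders,
# not the real Dell CSI Operator schema.
apiVersion: storage.example.com/v1
kind: CSIDriverDeployment
metadata:
  name: demo-driver
  namespace: demo-csi
spec:
  arrayEndpoint: https://array.example.com:8443   # management API endpoint of the array
  protocol: iSCSI                                 # data-path protocol
  storagePool: pool-1                             # default pool for provisioned volumes
  credentialsSecret: demo-array-creds             # secret holding array credentials
```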
After the installation is complete, you can access new storage classes directly from the UI and use them as objects with the CLI, as shown in the following figure:
Figure 11. Creating storage classes
You can use the new storage classes in a PV or PVC in the same way as the other supported types described in PV types, as shown in the following figure:
Figure 12. Creating PVCs
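The flow above can be sketched as a PVC that references one of the new storage classes, mounted by a pod. All names, the image, and the class are assumptions for illustration.

```yaml
# Sketch only: names, image, and storage class are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: demo-block-xfs     # one of the classes created via the Operator
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /data             # volume is mounted here inside the container
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data            # binds the pod to the PVC above
```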
OpenShift administrators can control storage consumption with quotas. The LimitRange and ResourceQuota objects offer quota capability. Set quotas at the namespace level to enforce minimum and maximum request sizes, as well as the number of volumes and the total consumption. This setting prevents a single pod from consuming all the storage resources and potentially affecting future claims.
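A minimal sketch of both mechanisms follows; the namespace and the numeric limits are placeholder values, not recommendations.

```yaml
# Sketch only: names and limits are placeholder values.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: demo
spec:
  hard:
    persistentvolumeclaims: "10"     # maximum number of PVCs in the namespace
    requests.storage: 100Gi          # total storage that can be requested
---
apiVersion: v1
kind: LimitRange
metadata:
  name: storage-limits
  namespace: demo
spec:
  limits:
    - type: PersistentVolumeClaim
      min:
        storage: 1Gi                 # smallest allowed claim
      max:
        storage: 50Gi                # largest allowed claim
```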