Red Hat OpenShift Data Foundation storage
Red Hat OpenShift Data Foundation provides persistent storage to applications, consuming either compute node local disks or dynamically provisioned storage through a standard OpenShift Container Platform cluster storage class.
Before you begin, ensure that you have completed steps 1 through 4 in Provisioning local storage.
To install the OpenShift Data Foundation operator:
$ oc annotate namespace openshift-storage openshift.io/node-selector=
This command overrides the cluster-wide default node selector for OpenShift Data Foundation.
To create and configure the OpenShift Data Foundation cluster:
Disks on all nodes: Uses the available disks that match the selected filters on all nodes.
Disks on selected nodes: Uses the available disks that match the selected filters only on the selected nodes.
| Parameter | Description |
|---|---|
| Volume Mode | Block is selected by default. |
| Device Type | Select one or more device types from the drop-down menu. |
| Disk Size | Set the minimum size (at least 100 GB) and the maximum available size of the devices to include. |
| Maximum Disks Limit | The maximum number of PVs that can be created on a node. If this field is left empty, PVs are created for all available disks on the matching nodes. |
The creation of the LocalVolumeSet is confirmed.
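Behind the scenes, the wizard generates a LocalVolumeSet custom resource from the parameters above. A roughly equivalent manifest is sketched below; the resource name, storage class name, and node selector label are assumptions and should be adapted to your environment:

```yaml
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: local-block                      # assumed name
  namespace: openshift-local-storage
spec:
  storageClassName: localblock           # assumed storage class name
  volumeMode: Block                      # Volume Mode (default)
  maxDeviceCount: 1                      # Maximum Disks Limit; omit to use all matching disks
  deviceInclusionSpec:
    deviceTypes:
      - disk                             # Device Type selection
    minSize: 100Gi                       # Disk Size minimum
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: cluster.ocs.openshift.io/openshift-storage   # assumed node label
            operator: Exists
```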
After some time, this field is populated with the capacity value based on all the attached disks that are associated with the storage class. The Selected nodes list shows the nodes based on the storage class.
Cluster-wide encryption: Encrypts the entire cluster (block and file).
StorageClass encryption: Creates encrypted persistent volumes (block only) using an encryption-enabled storage class. This encryption type requires an Advanced subscription.
Default (SDN) if you are using a single network.
Custom (Multus) if you are using multiple network interfaces.
Note: At the time of publication, Custom is a Tech Preview feature.
To modify any configuration settings, click Back to go back to the previous configuration page.
Click Create StorageSystem.
Verify that StorageCluster has a status of Ready and has a green tick mark next to it.
The local storage operator discovers all the disks and creates PVs on the cluster.
CephFS is a file system built on a Ceph storage cluster. The CephFS storage class can be used to provision shared file systems.
To create a PVC using ocs-storagecluster-cephfs:
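The `<YAML file>` in the following command defines the PVC. Based on the output shown below (claim name odf-cephfs-daemonset-pvc, 50Gi, RWX), a matching manifest might look like this:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: odf-cephfs-daemonset-pvc
  namespace: odf-test
spec:
  accessModes:
    - ReadWriteMany                      # RWX: shared file system access
  resources:
    requests:
      storage: 50Gi
  storageClassName: ocs-storagecluster-cephfs
```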
[core@csah-pri ~]$ oc create -f <YAML file>
[core@csah-pri ~]$ oc get pvc -n odf-test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
odf-cephfs-daemonset-pvc Bound pvc-44d3a805-0e6b-48b1-a8d6-0ab6c56e1975 50Gi RWX ocs-storagecluster-cephfs 29s
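The next command creates a pod that mounts the PVC. A minimal sketch of such a pod manifest follows; the pod name, container image, and mount path are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: odf-cephfs-test-pod              # assumed name
  namespace: odf-test
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi-minimal   # assumed image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: cephfs-vol
          mountPath: /mnt/cephfs         # assumed mount path
  volumes:
    - name: cephfs-vol
      persistentVolumeClaim:
        claimName: odf-cephfs-daemonset-pvc
```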
[core@csah-pri ~]$ oc create -f <YAML file>
[core@csah-pri ~]$ oc get pods -n odf-test
Ceph RBD is Ceph's block storage component, which distributes data and workload across the Ceph cluster. Ceph RBD can be used to provision block storage.
To create a PVC using ocs-storagecluster-ceph-rbd:
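As with CephFS, the `<YAML file>` defines the PVC. Based on the output shown below (claim name odf-rbd-pvc, 100Gi, RWO), a matching manifest might look like this:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: odf-rbd-pvc
  namespace: odf-test
spec:
  accessModes:
    - ReadWriteOnce                      # RWO: single-node block access
  resources:
    requests:
      storage: 100Gi
  storageClassName: ocs-storagecluster-ceph-rbd
```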
[core@csah-pri ~]$ oc create -f <YAML file>
[core@csah-pri ~]$ oc get pvc odf-rbd-pvc -n odf-test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
odf-rbd-pvc Bound pvc-7742925d-8ecb-4d49-8b4b-00313d8d7c85 100Gi RWO ocs-storagecluster-ceph-rbd 27s
[core@csah-pri ~]$ oc create -f <YAML file>
[core@csah-pri ~]$ oc get pods -n odf-test