Dell ObjectScale
ObjectScale is software-defined object storage that is built on a containerized architecture.
For Red Hat OpenShift Container Platform 4.12, Dell Technologies recommends deploying ObjectScale storage on a standard OpenShift cluster with dedicated control-plane nodes to ensure high availability (HA).
ObjectScale uses various Erasure Coding (EC) schemes for data protection. EC is a method of data protection in which data is broken into fragments, expanded, and encoded with redundant pieces, and the pieces are stored across different locations or storage media. If fragments are lost or corrupted on disk, ObjectScale reconstructs the data from the fragments that are stored elsewhere in the cluster. For example, a 12+4 scheme encodes each chunk of data into 12 data fragments plus 4 coding fragments, so the data survives the loss of up to four fragments. EC schemes are often used instead of traditional RAID because they reduce the time that is required to reconstruct data and, depending on the scheme, provide greater data resiliency.
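As a simplified illustration of the reconstruction principle, the following Python sketch uses a single XOR parity fragment. Production schemes such as 12+4 use more sophisticated codes that tolerate the loss of any four fragments; the `encode` and `reconstruct` helpers here are hypothetical and for demonstration only.

```python
# Toy erasure-coding illustration: k data fragments plus one XOR parity
# fragment. Any single lost fragment can be rebuilt from the survivors.
from functools import reduce

def encode(data: bytes, k: int) -> list:
    """Split data into k equal fragments and append one XOR parity fragment."""
    frag_len = -(-len(data) // k)                # ceiling division
    padded = data.ljust(frag_len * k, b"\x00")   # pad so fragments align
    frags = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), frags)
    return frags + [parity]

def reconstruct(frags: list) -> list:
    """Rebuild the single missing fragment (None) from the surviving ones."""
    missing = frags.index(None)
    survivors = [f for f in frags if f is not None]
    # XOR of all fragments (data + parity) is zero, so the XOR of the
    # survivors equals the missing fragment.
    frags[missing] = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors)
    return frags

fragments = encode(b"object data spread across the cluster", k=4)
fragments[2] = None                              # simulate a lost disk/fragment
restored = reconstruct(fragments)
print(b"".join(restored[:4]).rstrip(b"\x00"))    # original data recovered
```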
ObjectScale minimum disk requirements vary with the EC scheme. When an object store is created, the user specifies its total raw capacity and EC scheme. The number and size of storage server (SS) instances in an object store determine the persistent storage capacity that is allocated for raw user data. SS instances attach to persistent volumes (PVs) on disk through persistent volume claims (PVCs). ObjectScale places data for best protection, taking into account the number of volumes per disk, disks per SS instance, and SS instances across the cluster. For more information, see Dell ObjectScale.
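To make the capacity arithmetic concrete, the following sizing sketch estimates the raw capacity an EC scheme consumes for a target usable capacity. The formula raw = usable × (k + m) / k follows from the fragment layout described above; the disk count is an illustrative assumption, since actual ObjectScale placement also depends on volumes per disk and disks per SS instance.

```python
import math

def ec_sizing(usable_tb: float, k: int, m: int, disk_tb: float) -> dict:
    """Estimate raw capacity and disk count for a k+m EC scheme (illustrative only)."""
    raw_tb = usable_tb * (k + m) / k          # capacity including coding fragments
    return {
        "raw_tb": round(raw_tb, 1),
        "overhead_pct": round(100 * m / k, 1),
        "min_disks": math.ceil(raw_tb / disk_tb),
    }

# 100 TB usable under the 12+4 scheme on hypothetical 8 TB disks
print(ec_sizing(usable_tb=100, k=12, m=4, disk_tb=8))
# {'raw_tb': 133.3, 'overhead_pct': 33.3, 'min_disks': 17}
```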
The following table shows the number of nodes that are required for varying levels of fault tolerance:
| Erasure coding scheme | Number of nodes | Data availability during component failures |
|---|---|---|
| 12+4 | 4 to 5 | One node failure; disk failures from a single node |
| 12+4 | 6 to 9 | One node failure; disk failures from up to two different nodes (maximum of four disk failures in total) |
| 12+4 | 10 or more | Two node failures; disk failures from two different nodes; one node failure plus disk failures from another node |
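The tiers in the table can be captured in a small lookup helper, which may be handy as a sanity check when planning node counts. The function below is illustrative only, not an ObjectScale API; its boundaries are taken directly from the table.

```python
# Minimal sketch encoding the table's 12+4 fault-tolerance tiers.
def tolerance_12_4(nodes: int) -> str:
    """Return the failure scenarios a 12+4 object store survives at this node count."""
    if nodes < 4:
        raise ValueError("12+4 requires at least 4 nodes")
    if nodes <= 5:
        return "one node failure, or disk failures within a single node"
    if nodes <= 9:
        return ("one node failure, or up to four disk failures "
                "spread across at most two nodes")
    return ("two node failures, or one node failure plus disk "
            "failures on another node")

for n in (4, 7, 12):
    print(f"{n} nodes: {tolerance_12_4(n)}")
```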