Layout, protection, and failure domains
OneFS provisioning works on the premise of dividing similar nodes' drives into sets, or disk pools, with each pool representing a separate failure domain. These are protected by default at +2d:1n (the ability to withstand the failure of two drives or of one entire node), or often at +3d:1n1d (three drives, or one node plus one drive) in larger and denser clusters.
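The difference between these hybrid policies is easiest to see as a predicate over failure combinations. The following Python sketch is illustrative only; the parsing and semantics are paraphrased from the description above, not taken from OneFS itself:

```python
import re

def tolerates(policy: str, failed_drives: int, failed_nodes: int) -> bool:
    """Return True if a failure combination is within the stated tolerance
    of a hybrid policy such as '+2d:1n' (two drives OR one node) or
    '+3d:1n1d' (three drives, OR one node plus one drive)."""
    m = re.fullmatch(r"\+(\d)d:(\d)n(?:(\d)d)?", policy)
    if not m:
        raise ValueError(f"unrecognized policy string: {policy}")
    max_drives, max_nodes, extra_drives = int(m[1]), int(m[2]), int(m[3] or 0)
    if failed_nodes == 0:
        return failed_drives <= max_drives          # drive-only failures
    return failed_nodes <= max_nodes and failed_drives <= extra_drives

assert tolerates("+2d:1n", failed_drives=2, failed_nodes=0)    # two drives
assert tolerates("+2d:1n", failed_drives=0, failed_nodes=1)    # one node
assert not tolerates("+2d:1n", failed_drives=1, failed_nodes=1)
assert tolerates("+3d:1n1d", failed_drives=1, failed_nodes=1)  # node + drive
```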
Unlike the PowerScale F910, F900, F710, F600, F210, and F200, where each node is self-contained, chassis-based PowerScale platforms, such as the H700 and A3000, contain four compute modules (one per node) and five drive containers, or sleds, per node. Each sled is a tray that slides into the front of the chassis and contains between three and six drives, depending on the configuration of the particular chassis.
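As a quick arithmetic check of that geometry, a hypothetical helper (node and sled counts taken from the description above) shows that a single chassis holds between 60 and 120 drives:

```python
def drives_per_chassis(drives_per_sled: int,
                       nodes_per_chassis: int = 4,
                       sleds_per_node: int = 5) -> int:
    """Total drives in one chassis, per the layout described above."""
    if not 3 <= drives_per_sled <= 6:
        raise ValueError("a sled holds between three and six drives")
    return nodes_per_chassis * sleds_per_node * drives_per_sled

print(drives_per_chassis(3))   # 60 drives in a 3-drives-per-sled chassis
print(drives_per_chassis(6))   # 120 drives in a 6-drives-per-sled chassis
```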
Multiple groups of different node types, or node pools, can work together in a single, heterogeneous cluster. For example:

- One node pool of F-series nodes for IOPS-intensive applications
- One node pool of H-series nodes, primarily used for highly concurrent and sequential workloads
- One node pool of A-series nodes, primarily used for nearline and deep archive workloads
This allows a large cluster to present a single storage resource pool consisting of multiple drive media types (SSD, high-speed SAS, large-capacity SATA), providing a range of different performance, protection, and capacity characteristics. This heterogeneous storage pool in turn can support a diverse range of applications and workload requirements with a single, unified point of management. It also facilitates the mixing of older and newer hardware, allowing for simple investment protection even across product generations, and seamless hardware refreshes.
Each node pool contains disk pools only from the same type of storage node, and a disk pool belongs to exactly one node pool. Any new node added to a cluster is automatically allocated to a node pool, and its drives are then subdivided into disk pools, without any additional configuration steps; the node inherits the SmartPools configuration properties of its node pool.
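The automatic provisioning flow can be pictured with a simplified model. This is an assumed illustration, not OneFS internals; the names and structures here are invented for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class NodePool:
    node_type: str                 # e.g., "H700" or "A3000"
    protection: str                # SmartPools protection, e.g., "+2d:1n"
    nodes: list[str] = field(default_factory=list)

pools: dict[str, NodePool] = {}

def add_node(node_id: str, node_type: str,
             default_protection: str = "+2d:1n") -> NodePool:
    """Place a new node in the pool matching its hardware type, creating
    the pool if needed. The node inherits the pool's SmartPools settings;
    subdividing its drives into disk pools is not modeled here."""
    pool = pools.setdefault(node_type, NodePool(node_type, default_protection))
    pool.nodes.append(node_id)
    return pool

add_node("node-17", "H700")
add_node("node-18", "H700")        # joins the existing H700 pool automatically
```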