Depending on who you ask, you will likely receive several different definitions of exactly what constitutes a large cluster, based on a variety of criteria.
The following table provides some notable landmarks that are reached as the cluster node count increases.
Node count | Description |
20 | The largest number of nodes (stripe width) across which OneFS can write data and parity blocks. Also the point at which Gen6 clusters split into two neighborhoods. |
32 | Point at which a larger, modular enterprise-class switch (>32 ports) is often deployed for Ethernet backend clusters. |
40 | Point at which PowerScale F-series (and Isilon Gen5 and earlier) clusters automatically divide into two neighborhoods, and Gen6 clusters achieve chassis-level redundancy (four neighborhoods). |
48 | Point at which a larger, modular enterprise-class switch (>48 ports) is required for InfiniBand backend clusters. |
64 | Historically recommended maximum cluster size. |
80 | Point at which InsightIQ is no longer supported, CELOG alerting becomes challenging, and WebUI and CLI list processing and reporting become cumbersome. |
144 | Maximum supported cluster node count for OneFS 8.1.x and earlier. |
252 | Maximum supported cluster node count for OneFS 8.2 and later. |
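The thresholds above can be captured in a short, purely illustrative sketch. The `LANDMARKS` list and `landmarks_reached` function are hypothetical names, not any OneFS API; the thresholds and descriptions are taken directly from the table.

```python
# Hypothetical helper, not a OneFS API: report which landmarks from the
# table a cluster of a given node count has reached.
LANDMARKS = [
    (20, "Maximum stripe width; Gen6 clusters split into two neighborhoods"),
    (32, "Modular enterprise Ethernet backend switch often deployed (>32 ports)"),
    (40, "F-series/Gen5 clusters split into two neighborhoods; Gen6 gains chassis-level redundancy"),
    (48, "Modular enterprise InfiniBand backend switch required (>48 ports)"),
    (64, "Historically recommended maximum cluster size"),
    (80, "No InsightIQ support; CELOG, WebUI, and CLI reporting become cumbersome"),
    (144, "Maximum supported node count for OneFS 8.1.x and earlier"),
    (252, "Maximum supported node count for OneFS 8.2 and later"),
]

def landmarks_reached(node_count: int) -> list[str]:
    """Return the descriptions of all landmarks at or below node_count."""
    return [desc for threshold, desc in LANDMARKS if node_count >= threshold]
```

For example, a 40-node cluster has passed the 20-, 32-, and 40-node landmarks.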
For the purposes of this paper, we focus on node count as the cluster size criterion and consider the following definitions.
Definition | PowerScale F-series | PowerScale Gen6 |
Small cluster | Between 3 and 32 nodes | Between 1 and 8 chassis |
Medium cluster | Between 32 and 48 nodes | Between 9 and 12 chassis |
Large cluster | Between 48 and 144 nodes | Between 13 and 36 chassis |
Extra-large cluster | Between 144 and 252 nodes | Between 20 and 64 chassis |
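These definitions can be sketched as a simple classifier keyed on node count (the F-series column). This is an illustrative function, not part of OneFS; how to assign the shared boundary values (32, 48, 144), where the table's ranges overlap, is an assumption here.

```python
def cluster_size_class(node_count: int) -> str:
    """Classify a cluster by node count per the definitions table.

    Boundary values (32, 48, 144) appear in two rows of the table;
    assigning them to the smaller class is an assumption.
    """
    if not 3 <= node_count <= 252:
        raise ValueError("OneFS clusters span 3 to 252 nodes")
    if node_count <= 32:
        return "small"
    if node_count <= 48:
        return "medium"
    if node_count <= 144:
        return "large"
    return "extra-large"
```

For example, a 100-node cluster falls in the large class, while a 200-node cluster is extra-large.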
Prior to OneFS 8.0, the recommended maximum cluster size was around 64 nodes. This balanced customer experience with the manageability of extra-large clusters, the risk profile of the fault domain such a cluster represents for the business, and the ease and simplicity of a single cluster. Since then, OneFS 8 and later releases have added considerable backend network infrastructure enhancements, removing the 64-node maximum recommendation and providing cluster stability up to the current supported maximum of 252 nodes per cluster in OneFS 8.2 and later.
One of the significant developments in cluster scaling has been the introduction of Ethernet as the cluster backend network. However, Gen6 nodes can still use an InfiniBand backend for compatibility with previous generations of nodes, allowing legacy clusters to be augmented with the new generation of hardware.
Note: When provisioning an all-new cluster, 40 Gb or 100 Gb Ethernet is highly encouraged for the backend interconnect network. Additionally, an Ethernet backend is strongly recommended for large clusters, and configurations using Dell switches in a leaf-spine topology are supported all the way up to 252 nodes.