A Dell PowerScale cluster is built on a highly redundant and scalable shared-nothing hardware architecture. The fundamental building block is the platform node; a cluster comprises anywhere from 3 to 252 of these nodes, each containing CPU, memory, disk, and I/O controllers in an efficient 1U or 4U rack-mountable chassis. Redundant Ethernet or InfiniBand (IB) adapters provide a high-speed back-end cluster interconnect that functions essentially as a distributed system bus, and each node houses mirrored, battery-backed file system journals to protect any uncommitted writes. Except for the LCD control front panel, all node components are standard enterprise commodity hardware.
These platform nodes contain various storage media types and densities, including SAS and SATA hard drives, solid state drives (SSDs), and a configurable quantity of memory. This granularity allows customers to select an appropriate price, performance, and protection point for the requirements of specific workflows or storage tiers.
Highly available client access to storage is provided through multiple 1, 10, 25, or 40 Gb/s Ethernet interface controllers within each node, across a range of file and object protocols including NFS, SMB, S3, and HDFS. OneFS also fully supports both IPv4 and IPv6 environments on the front-end Ethernet networks.
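As an illustrative sketch of this multi-protocol access, a Linux client could mount the same OneFS directory tree over both NFS and SMB using standard mount commands. The cluster hostname, export path, share name, and credentials below are hypothetical placeholders, not values from any specific OneFS configuration:

```shell
# Mount a OneFS NFS export (hypothetical SmartConnect zone name and export path)
sudo mkdir -p /mnt/onefs-nfs
sudo mount -t nfs -o vers=3,tcp cluster.example.com:/ifs/data /mnt/onefs-nfs

# Mount the same data as an SMB share (hypothetical share name "data" and user)
sudo mkdir -p /mnt/onefs-smb
sudo mount -t cifs -o username=jdoe //cluster.example.com/data /mnt/onefs-smb
```

Because OneFS presents a single namespace under /ifs, both mounts can expose the same files concurrently; which protocol a client uses is a deployment choice rather than a data-layout decision.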
Heterogeneous clusters can be architected with a wide variety of node styles and capacities to meet the needs of varied datasets and a wide spectrum of workloads. These node styles fall loosely into four main categories, or tiers. The following figure illustrates these tiers and the associated node models: