This Ready Stack uses the Dell EMC Unity XT x80F All-Flash storage platform with vSphere integration, connected to a 32 Gb FC storage network. With all-inclusive software, Unity XT All-Flash systems deliver consistent performance with low response times and are well suited to mixed virtual-workload requirements. Two Dell EMC Connectrix DS-6620B switches make up the redundant FC fabrics.
The following table compares the Unity XT x80F All-Flash storage arrays that this Ready Stack solution supports. For more information, see the Dell EMC Unity XT Storage Series Specification Sheet.
Table 2. Dell EMC Unity XT x80F All-Flash systems
| Resources | Unity XT 380F | Unity XT 480F | Unity XT 680F | Unity XT 880F |
|---|---|---|---|---|
| Processor (per array) | 2 x Intel CPUs, 12 cores per array, 1.7 GHz | 2 x dual-socket Intel CPUs, 32 cores per array, 1.8 GHz | 2 x dual-socket Intel CPUs, 48 cores per array, 2.1 GHz | 2 x dual-socket Intel CPUs, 64 cores per array, 2.1 GHz |
| Memory (per array) | 128 GB | 192 GB | 384 GB | 768 GB |
| Maximum number of drives | 500 | 750 | 1,000 | 1,500 |
| Maximum capacity (raw) | 2.4 PB | 4.0 PB | 8.0 PB | 16.0 PB |
| Maximum FAST Cache | Up to 800 GB | Up to 1.2 TB | Up to 3.2 TB | Up to 6.0 TB |
The solution configuration includes two FC fabrics for high availability. For the Unity XT x80F arrays, FC port 0 from each controller connects to FC fabric switch A, while FC port 1 connects to FC fabric switch B. Unity XT x80F arrays have expansion slots that can provide additional front-end (FC) or back-end (mini-SAS HD) ports. For more information, see the Dell EMC Unity XT Storage Series Specification Sheet.
Each management and rack compute server is configured with an Emulex 35002 dual-port FC 32 Gb PCIe low-profile adapter for connecting to the storage fabrics. Each port connects to a Connectrix switch.
Each Unity XT x80F storage array is equipped with two back-end buses that use mini-SAS HD connectivity. Connect additional enclosures so that the load is balanced equally between the available buses. Because the Disk Processor Enclosure (DPE) is on Bus 0, place the first expansion enclosure on Bus 1, the second on Bus 0, and so on. For details about adding enclosures, see the Dell EMC Unity: Best Practices Guide.
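To illustrate this balancing rule, the following Python sketch alternates expansion enclosures across the two back-end buses, starting with Bus 1 because the DPE already occupies Bus 0. It is a minimal, hypothetical helper for planning purposes only, not Dell EMC tooling.

```python
def assign_enclosure_buses(num_enclosures: int, num_buses: int = 2) -> dict:
    """Alternate expansion enclosures across back-end buses.

    The Disk Processor Enclosure (DPE) sits on Bus 0, so the first
    expansion enclosure goes to Bus 1, the second back to Bus 0, and so on.
    """
    assignments = {}
    for enclosure in range(1, num_enclosures + 1):
        # Offset by the DPE on Bus 0: enclosure 1 -> Bus 1, enclosure 2 -> Bus 0, ...
        assignments[enclosure] = enclosure % num_buses
    return assignments


if __name__ == "__main__":
    for enclosure, bus in assign_enclosure_buses(4).items():
        print(f"Expansion enclosure {enclosure} -> Bus {bus}")
```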
For both rack server platforms, each server’s HBA port 1 connects to Connectrix switch 1, while HBA port 2 connects to Connectrix switch 2. These ports are then zoned with the Unity array target ports to enable storage access for the hypervisor hosts.
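The following Python sketch models the connectivity and zoning pattern just described: one single-initiator zone per host HBA port and fabric, paired with the Unity target ports on that fabric. All WWPNs, host names, and zone names are illustrative placeholders, not actual Connectrix or Unity values, and the output is a planning aid rather than switch configuration.

```python
# Illustrative only: hypothetical WWPNs and zone names. Shows the intended
# single-initiator zoning pattern: HBA port 1 -> fabric A (switch 1),
# HBA port 2 -> fabric B (switch 2).
HOST_HBAS = {
    "esxi-host-01": {"port1": "10:00:00:00:c9:00:00:01",
                     "port2": "10:00:00:00:c9:00:00:02"},
}

UNITY_TARGETS = {
    "fabric_a": ["50:06:01:60:47:e0:01:a0", "50:06:01:68:47:e0:01:a0"],  # SPA/SPB port 0
    "fabric_b": ["50:06:01:61:47:e0:01:a0", "50:06:01:69:47:e0:01:a0"],  # SPA/SPB port 1
}


def build_zones():
    """Return one single-initiator zone per host HBA port and fabric."""
    zones = []
    for host, ports in HOST_HBAS.items():
        zones.append({"fabric": "A", "zone": f"{host}_p1_unity",
                      "members": [ports["port1"], *UNITY_TARGETS["fabric_a"]]})
        zones.append({"fabric": "B", "zone": f"{host}_p2_unity",
                      "members": [ports["port2"], *UNITY_TARGETS["fabric_b"]]})
    return zones


for zone in build_zones():
    print(zone["fabric"], zone["zone"], ", ".join(zone["members"]))
```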
Multiple datastores within the vSphere cluster enable vSphere High Availability (HA) datastore heartbeating, which ensures that network partitioning or host isolation does not trigger unnecessary VM failover within the cluster. By default, the vSphere cluster selects two datastores for datastore heartbeating; this number can be increased to a maximum of five.
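If you want more than the default number of heartbeat datastores, the HA advanced option das.heartbeatDsPerHost controls it. The following pyVmomi sketch shows one way to set that option; it is a minimal sketch, assuming pyVmomi is installed, and the vCenter address, credentials, cluster name, and the value of 4 are placeholders rather than recommendations from this guide.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for illustration only.
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "ReadyStack-Cluster")

    # Raise the number of heartbeat datastores per host (default 2, maximum 5)
    # through the HA advanced option das.heartbeatDsPerHost.
    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(
            option=[vim.option.OptionValue(key="das.heartbeatDsPerHost",
                                           value="4")]))
    task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
    # Monitor 'task' with your preferred task-wait helper before relying on the change.
finally:
    Disconnect(si)
```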
Block storage presented to vSphere hosts from the Unity XT x80F array uses the round-robin Path Selection Policy (PSP) by default. Round robin is the recommended PSP for Unity block storage, but by default it switches paths only after 1,000 I/Os. Reducing this value enables more efficient use of all available paths.
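One way to apply a lower switching value is with the esxcli storage nmp commands, as in the Python sketch below. This is a sketch only: it is intended to run in the ESXi Shell (which includes Python) or to be adapted to your remote-execution tooling, and the "DGC" device filter and the IOPS value of 1 are assumptions based on common Unity and vSphere practice; validate both against current Dell EMC and VMware guidance. Note also that per-device settings do not automatically apply to devices presented later unless a corresponding SATP claim rule is added.

```python
import subprocess


def unity_devices():
    """Yield NAA identifiers whose display name mentions DGC, the vendor
    string that Unity arrays typically present (assumption: verify on your hosts)."""
    output = subprocess.run(["esxcli", "storage", "nmp", "device", "list"],
                            capture_output=True, text=True, check=True).stdout
    current = None
    for line in output.splitlines():
        if line and not line[0].isspace():
            current = line.strip()                      # e.g. "naa.6006016..."
        elif "Device Display Name:" in line and "DGC" in line and current:
            yield current


def set_rr_iops(device: str, iops: int = 1) -> None:
    """Switch the round-robin path-change trigger from the 1,000-IOPS default."""
    subprocess.run(["esxcli", "storage", "nmp", "psp", "roundrobin",
                    "deviceconfig", "set", "--device", device,
                    "--type", "iops", "--iops", str(iops)], check=True)


for dev in unity_devices():
    set_rr_iops(dev)
```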
VMware currently supports a maximum datastore size of 64 TB and 2,048 powered-on VMs per VMFS datastore. However, in most circumstances and environments, the target number of VMs per datastore depends on multiple factors. These factors include workload profile (IOPS, throughput, data locality, read/write percentage), underlying storage and fabric configuration, and recoverability requirements. As a conservative recommendation, 15 to 25 VMs per 1.25 TB to 2.5 TB datastore is typical. You can easily expand LUNs and vSphere datastores to address future growth. Maintaining a smaller number of VMs per datastore greatly reduces the potential for I/O contention, which results in more consistent performance across the environment.
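To make the arithmetic behind this guidance concrete, the following Python sketch estimates how many datastores an environment needs from both the VM-count and capacity angles and takes the larger of the two. The input values are placeholders; adjust the VM-per-datastore and datastore-size assumptions to match your workload profile.

```python
import math


def datastores_needed(total_vms: int, avg_vm_size_gb: float,
                      vms_per_datastore: int = 20,
                      datastore_size_tb: float = 2.0) -> dict:
    """Estimate datastore count from both the VM-count and capacity angles."""
    by_vm_count = math.ceil(total_vms / vms_per_datastore)
    total_capacity_tb = total_vms * avg_vm_size_gb / 1024
    by_capacity = math.ceil(total_capacity_tb / datastore_size_tb)
    return {
        "by_vm_count": by_vm_count,
        "by_capacity": by_capacity,
        "recommended": max(by_vm_count, by_capacity),  # take the larger of the two
    }


print(datastores_needed(total_vms=300, avg_vm_size_gb=100))
# -> 15 datastores by VM count and 15 by capacity with these example inputs
```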
Unity arrays offer thin provisioning as a recommended option for block storage, and thin provisioning is required to enable compression. Enabling thin provisioning on virtual disks within VMware does not initially provide additional space efficiency when thin provisioning is already enabled on the array. However, reclaiming space from within a compatible guest operating system requires thin provisioning on both the storage and the virtual disks.
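As a quick way to verify the virtual-disk side of this requirement, the following pyVmomi sketch lists which disks on a VM are thin-provisioned. It is a minimal sketch, assuming pyVmomi is installed; the vCenter address, credentials, and VM name are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for illustration only.
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "app-vm-01")
    for device in vm.config.hardware.device:
        if isinstance(device, vim.vm.device.VirtualDisk):
            # thinProvisioned is present on flat-file disk backings.
            thin = getattr(device.backing, "thinProvisioned", None)
            print(f"{device.deviceInfo.label}: thinProvisioned={thin}")
finally:
    Disconnect(si)
```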
The sixth generation of Isilon storage arrays consists of eight new platforms: two All-Flash platforms (F800, F810), four Hybrid platforms (H600, H5600, H500, H400), and two Archive platforms (A200, A2000). All Isilon platforms are powered by the Isilon OneFS 9.0.x operating system. The new Isilon platforms integrate easily into an existing Isilon cluster or can be deployed as a new cluster to enable IT modernization. These platforms all use a new, highly dense modular architecture that provides four Isilon nodes within a single Isilon chassis.
The previous generation of Isilon hardware required a minimum of three nodes with a minimum of 6U of rack space to form a cluster. The new generation of Isilon hardware requires a single chassis of four nodes in only 4U of rack space to create a cluster, providing a 75-percent density savings. For back-end internode communication, the new generation of hardware adds support for Ethernet connectivity, in addition to the previously supported InfiniBand connectivity.
The new Isilon arrays provide newly designed drive sleds to contain the physical drives. The drive sleds provide increased availability and redundancy, allowing for faster disk rebuilds during recovery from a hardware failure. In addition, the “node-pair” design provides increased resiliency and availability. Within each chassis, each node in an identical pair of nodes shares a mirrored journal and two power supplies to eliminate single points of failure and increase data availability.
This generation of Isilon hardware also delivers vastly improved serviceability because of its standardized modular design. All new Isilon models include similar components. For example, at the front of the chassis, every node across all models has five sleds that house the drives, and components at the back of the chassis are in the same locations on every model. This design consistency means that removing a drive or drive sled follows the same procedure on every node. Streamlined hardware serviceability improves speed to recovery, reduces errors, and lowers risk.
The following table compares the new Isilon platforms:
Table 3. Sixth-generation Dell EMC Isilon systems
| Model | Capacity per chassis | Drives per chassis |
|---|---|---|
| F800 | 96–924 TB | 60 SSDs (1.6 TB, 3.2 TB, 3.84 TB, 7.68 TB, or 15.36 TB) |
| F810 | 230–924 TB | 60 SSDs (3.84 TB, 7.68 TB, or 15.36 TB) |
| H600 | 72–144 TB | 120 SAS drives (600 GB or 1.2 TB) |
| H5600 | 800–960 TB | 80 SATA drives (10 TB or 12 TB) |
| H400 and H500 | 120–720 TB | 60 SATA drives (2 TB, 4 TB, 8 TB, or 12 TB) |
| A200 | 120–720 TB | 60 SATA drives (2 TB, 4 TB, 8 TB, or 12 TB) |
| A2000 | 800 TB or 960 TB | 80 SATA drives (10 TB or 12 TB) |
The Isilon H500 array runs on the OneFS 9.0.x operating system. This versatile hybrid platform is designed to provide high throughput and scalability by delivering up to 5 GB/s bandwidth per chassis with a capacity of up to 720 TB per chassis. With 60 SATA drives per chassis, the Isilon H500 offers a choice of 2 TB, 4 TB, 8 TB, or 12 TB drive capacities. It can also be configured with four to eight 1.6 TB or 3.2 TB SSDs per chassis for cache to optimize performance.
The Isilon H500 can support a broad range of enterprise workloads and file use cases. Compared to the Isilon S210 platform, the H500 provides two times more throughput and eight times more rack capacity. Each Isilon H500 has 128 GB of memory per node, a choice of 10 GbE or 40 GbE for front-end networking, and InfiniBand or 40 GbE for back-end connectivity.
Table 4. H500 node attributes and options
| Attribute | Description |
|---|---|
| Capacity | 120–720 TB |
| Hard drives | 15 per node / 60 per chassis (2 TB, 4 TB, 8 TB, or 12 TB SATA drives) |
| Number of nodes (per chassis) | 4 |
| Cache (per node): solid-state drives (1.6 TB or 3.2 TB) | 1 or 2 |
| Self-encrypting drive (SED) option | Yes |
| OneFS version | 9.0.x and later |
| System memory (per node) | 128 GB |
| Front-end networking (per node) | 2 x 10 GbE (SFP+) or 2 x 40 GbE (QSFP+) |
| Network interfaces | Support for IEEE 802.3 standards for 1 GbE, 10 GbE, 25 GbE, and 40 GbE network connectivity |
| Drive controller | SATA-3 6 Gbps, SAS-3 12 Gbps |
| CPU type | Intel Xeon E5-2630 v4 |
| Infrastructure (back-end) networking (per node) | 2 x InfiniBand quad data rate (QDR) connections or 2 x 40 GbE (QSFP+) |