Legacy PowerMax 2000 and 8000 models
When introduced, the PowerMax 2000 and 8000 arrays raised the bar again for enterprise storage, delivering unmatched levels of performance and consolidation for high-value, high-demand workloads. These arrays support 32 Gb/s NVMe/FC to deliver on the promise of end-to-end NVMe, along with storage class memory (SCM) drives used as persistent storage.
The PowerMax 2000 and 8000 arrays provide all the features and proven data services demanded of an enterprise active-active controller array, including security, protection, availability, scalability, and massive consolidation, now delivered at latencies measured in microseconds, not milliseconds.
The Next Generation PowerMax storage platform is designed to offer industry-leading cyber resiliency, security, and intelligent automation while balancing high performance with remarkable efficiency. The Next Generation PowerMax family comprises two storage systems: the PowerMax 2500 and the PowerMax 8500.
Both the PowerMax 2500 and 8500 models use the industry’s richest data services and excel at workload consolidation because both systems provide storage for block, file, and mainframe workloads.
Based on the Dynamic Fabric architecture and Flexible RAID, the next generation PowerMax systems offer a powerful yet flexible design that lets you independently grow nodes and storage capacity in increments of a single drive. The PowerMax 2500 and 8500 arrays use Intel® Xeon® Scalable processors and today’s most advanced storage technologies, including end-to-end NVMe, 100 Gb/s InfiniBand, dual-ported NVMe flash drives, and NVMe/TCP and NVMe/FC connectivity. Both models employ hardware-based data reduction and come with a 5:1 data reduction guarantee for Open Systems and 3:1 for Mainframe. Each PowerMax model is designed for six nines of availability and ships with new intelligent PDUs that provide real-time power consumption monitoring and alerting.
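As a hedged illustration of how these guarantees relate to capacity planning, the following minimal Python sketch computes effective (logical) capacity from usable capacity and a reduction ratio. The 100 TB figure is a hypothetical example, not a PowerMax specification.

```python
# Minimal sketch (not a Dell sizing tool): how a data reduction guarantee
# translates usable capacity into effective capacity.
def effective_capacity_tb(usable_tb: float, reduction_ratio: float) -> float:
    """Effective (logical) capacity = usable capacity x data reduction ratio."""
    return usable_tb * reduction_ratio

# Hypothetical example: 100 TB usable under the 5:1 Open Systems guarantee.
print(effective_capacity_tb(100, 5.0))  # 500.0 TB effective
# The same usable capacity under the 3:1 Mainframe guarantee.
print(effective_capacity_tb(100, 3.0))  # 300.0 TB effective
```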
The following table provides a high-level comparison between the PowerMax legacy and Next Generation systems:
Table 44. Comparison of PowerMax Legacy and Next Generation systems
Feature | PowerMax 2000 | PowerMax 2500 | PowerMax 8000 | PowerMax 8500 |
Active/active, scale out, scale up architecture | Yes | Yes | Yes | Yes |
Disaggregated storage architecture | No | Yes | No | Yes |
Zero trust architecture | No | Yes | No | Yes |
Self-encrypting drives | No | Yes | No | Yes |
Single drive upgrades | No | Yes | No | Yes |
NVMe/TCP support | No | Yes | No | Yes |
64-bit file support | No | Yes | No | Yes |
Mainframe workload support | No | Yes | Yes | Yes |
Max effective capacity per array | 1.2 PBe | 8 PBe | 4.5 PBe | 18 PBe |
Data Reduction Guarantees | 3.5:1 Open Systems | 5:1 Open Systems, 3:1 Mainframe | 3.5:1 Open Systems, No mainframe data reduction | 5:1 Open Systems, 3:1 Mainframe |
For more information about the PowerMax family of products, see:
Dell Unity XT storage
Dell Unity XT systems provide a powerful storage system in a cost-efficient and space-efficient profile. Key Dell Unity XT features include:
For more information about Unity XT, see the Unity XT information page or the Unity XT Info Hub.
Dell PowerStore storage
In a constantly changing world of increasing complexity and scale, the need for an easy-to-use, intelligent storage system continues to grow. Organizations that adopt new applications and solutions require dependable storage and often face the challenge of doing more with less. PowerStore addresses this challenge by packaging a powerful storage system into a cost-efficient and space-efficient profile. Key PowerStore features and benefits include:
For more information about PowerStore, see the Dell PowerStore information page or the PowerStore Info Hub.
Dell PowerScale storage
The PowerScale family of scale-out NAS storage arrays delivers increased performance for file-based data applications and workflows from a single file-system architecture. The PowerScale all-flash storage arrays can exist in the same cluster with Isilon Gen 6 nodes to drive your traditional and modern applications.
Note: PowerScale storage arrays cannot exist in the same cluster with Isilon Gen 5 nodes.
PowerScale provides scale-out capacity for use as NFS and SMB (CIFS) shares for VMware vSphere VMs. PowerScale is available in all-flash, hybrid, and archive configurations.
The following table shows the hardware components that are used with each configuration:
Table 45. PowerScale hardware components based on configuration
PowerScale model | Node | Processor | Memory |
PowerScale All-Flash | F200 | Single-socket CPU | 48 GB or 96 GB
PowerScale All-Flash | F600 | Dual-socket Intel Cascade Lake 4210 2.2 GHz | 128 GB, 192 GB, or 384 GB
PowerScale All-Flash | F900 | Dual-socket Intel Cascade Lake 6240R 2.4 GHz 24C | 736 GB
PowerScale Hybrid | H700 | Intel Xeon Gold 6208U 16-core, 2.9 GHz | 192 GB
PowerScale Hybrid | H7000 | Intel Xeon Gold 6208U 16-core, 2.9 GHz | 384 GB
PowerScale Archive | A300 | Intel Xeon Bronze 3204 6-core, 1.9 GHz | 96 GB
PowerScale Archive | A3000 | Intel Xeon Bronze 3204 6-core, 1.9 GHz | 96 GB
The Cisco Nexus 9336C-FX2 is the only supported switch for PowerScale node front-end connectivity.
The following table shows the PowerScale back-end switches that support Isilon Gen 6 nodes and PowerScale Gen 6.5 nodes:
Table 46. PowerScale back-end switch support for PowerScale nodes
PowerScale back-end switch | Supported PowerScale nodes |
Dell S4112 10 GbE 24P | A300, A3000, H700, H7000, F200 |
Dell S4148 10 GbE 48P | A300, A3000, H700, H7000, F200 |
Dell S5232 40 GbE or 100 GbE 32P | A300, A3000, H700, H7000, F200, F600, F900 |
Dell Z9264 40 GbE or 100 GbE 64P | A300, A3000, H700, H7000, F200, F600, F900 |
Arista 7304 10 GbE 8U 96P * with 40 Gb LC and 32P | A300, A3000, H700, H7000, F200, F600, F900 |
Arista 7308 40 GbE 13U 64P ** | A300, A3000, H700, H7000, F200, F600, F900 |
Celestica 10 GbE 1U 24P (Legacy) | A300, A3000, H700, H7000, F200 |
Celestica D2060 10 GbE 1U 48P (Legacy) | A300, A3000, H700, H7000, F200 |
Celestica D4040 40 GbE 1U 32P (Legacy) | A300, A3000, H700, H7000, F200, F600, F900 |
*Arista 7304 ships with two line cards, each with 48 10 GbE ports. You can add:
**Arista 7308 ships with two line cards, each with 32 40 GbE ports. You can add:
The PowerScale back-end Ethernet switches provide:
For leaf and spine implementations:
PowerScale hybrid and archive models are housed in a four-node chassis. Each chassis has four compute slots, each containing:
Note: 1 GbE connections are not used.
Note: Do not use ports 1 to 6 and 33 to 36 for 40 GbE node connectivity because of port limitations in Cisco Nexus OS 9.x. Use the last four ports (33 to 36) for vPC and uplinks. You can scale up to 26 nodes per switch pair.
For more information, see Guidelines and Limitations for Layer 2 Interfaces.
The PowerScale All-Flash F200 model is a 1U node with the following components:
The following table shows the PowerScale F200 array hardware and software specifications:
Table 47. PowerScale F200 specifications
Component | F200
Processors per node | Single-socket CPU
Memory per node | 48 GB or 96 GB
Chassis capacity | 960 GB SSD: 3.84 TB; 1.92 TB SSD: 7.68 TB; 3.84 TB SSD: 15.36 TB; 7.68 TB SSD: 30.72 TB
Front-end networking | 2 x 10 GbE or 2 x 25 GbE
Back-end networking | 2 x 10 GbE or 2 x 25 GbE
The PowerScale All-Flash F600 model is a 1U node with the following components:
The following table shows the PowerScale F600 model hardware and software specifications:
Table 48. PowerScale F600 specifications
Component | F600 |
Processors per node | Dual-socket Intel Cascade Lake 4210 2.2 GHz |
Memory per node | 128 GB (8 x 16 GB Dual Rank DDR4 RDIMMs), 192 GB (12 x 16 GB Dual Rank DDR4 RDIMMs), or 384 GB (12 x 32 GB Dual Rank DDR4 RDIMMs)
Chassis capacity | 1.92 TB SSD: 15.36 TB; 3.84 TB SSD: 30.72 TB; 7.68 TB SSD: 61.44 TB; 15.36 TB SSD: 122.88 TB
Front-end networking | 2 x 10 GbE, 2 x 25 GbE, 2 x 40 GbE, or 2 x 100 GbE
Back-end networking | 2 x 40 GbE or 2 x 100 GbE |
The PowerScale All-Flash F900 is a 2U node containing the following components:
The following table shows the hardware and software specifications for the PowerScale F900 array:
Table 49. PowerScale F900 specifications
Component | F900 |
Processors per node | Dual-socket Intel Cascade Lake 6240R (2.4 GHz, 24C) |
Memory per node | 736 GB (23 x 32 GB Dual Rank DDR4 RDIMMs) |
Chassis capacity | 1.92 TB SSD: 46 TB; 3.84 TB SSD: 92 TB; 7.68 TB SSD: 184.3 TB; 15.36 TB SSD: 368.6 TB
Front-end networking | 2 x 10 GbE, 2 x 25 GbE, 2 x 40 GbE, or 2 x 100 GbE
Back-end networking | 2 x 40 GbE or 2 x 100 GbE |
PowerScale Gen 6 hybrid and archive models are contained in a four-node chassis.
The PowerScale Hybrid H700 node has the following components:
The following table shows the hardware and software specifications for the PowerScale H700 array:
Table 50. PowerScale H700 specifications
Component | H700 |
Operating system | PowerScale OneFS 9.2.1.4 or later |
Chassis nodes | Infinity 4U/4 nodes per chassis |
Chassis depth | Standard – 15 drives per node |
Processor | Intel Xeon Gold 6208U (16 C, 2.9 GHz) |
Memory (fixed) | 192 GB DDR4
Storage drive options | 15 x 3.5-in. HDD capacity options: 2 TB (30 TB/node), 4 TB (60 TB/node), 8 TB (120 TB/node), 12 TB (180 TB/node), 16 TB (240 TB/node)
SED/FIPS support | Yes (SAS drives)
Cache SSD | 1 or 2 x 0.8/1.6/3.2 TB
Inline data reduction | Yes
Front-end connectivity | 2 x 10 GbE, 2 x 25 GbE, 2 x 40 GbE, or 2 x 100 GbE
Back-end connectivity | 2 x 10 GbE, 2 x 25 GbE, 2 x 40 GbE, or 2 x 100 GbE
The PowerScale Hybrid H7000 node has the following components:
The following table shows the hardware and software specifications for the PowerScale H7000 array:
Table 51. PowerScale H7000 specifications
Component | H7000 |
Operating system | PowerScale OneFS 9.2.1.4 or later |
Chassis: nodes | Infinity 4U/4 nodes per chassis |
Chassis: depth | Deep – 20 drives per node |
Processor | Intel Xeon Gold 6208U (16 C, 2.9 GHz) |
Memory (fixed) | 384 GB DDR4
Storage/drive options | 20 x 3.5-in. HDD capacity options: 12 TB (240 TB/node), 16 TB (320 TB/node)
SED/FIPS support | Yes (SAS drives)
Cache SSD | 2 x 3.2 TB
Inline data reduction | Yes
Front-end connectivity | 2 x 10 GbE, 2 x 25 GbE, 2 x 40 GbE, or 2 x 100 GbE
Back-end connectivity | 2 x 10 GbE, 2 x 25 GbE, 2 x 40 GbE, or 2 x 100 GbE
The PowerScale Archive A300 node has the following components:
The following table shows the PowerScale A300 hardware and software specifications:
Table 52. PowerScale A300 specifications
Component | A300 |
Operating system | PowerScale OneFS 9.2.1.4 or later |
Chassis/Nodes | Infinity 4U/4 nodes per chassis |
Chassis depth | Standard – 15 drives per node |
Processor | Intel Xeon Bronze 3204 (6 C, 1.9 GHz) |
Memory (fixed) | 96 GB DDR4
Storage/drive options | 15 x 3.5-in. HDD capacity options: 2 TB (30 TB/node), 4 TB (60 TB/node), 8 TB (120 TB/node), 12 TB (180 TB/node), 16 TB (240 TB/node)
SED/FIPS support | Yes (SAS drives) |
Cache SSD | 1 or 2 x 0.8/1.6/3.2 TB (1 x 0.8 TB for L3 cache version)
Inline data reduction | Yes |
Front-end connectivity | 2 x 10 GbE, 2 x 25 GbE, 2 x 40 GbE, or 2 x 100 GbE
Back-end connectivity | 2 x 10 GbE, 2 x 25 GbE, 2 x 40 GbE, or 2 x 100 GbE
The PowerScale Archive A3000 node has the following components:
The following table shows the PowerScale A3000 hardware and software specifications:
Table 52. PowerScale A3000 specifications
Component | A3000 |
Operating system | PowerScale OneFS 9.2.1.4 or later |
Chassis/nodes | Infinity 4U/4 nodes per chassis |
Chassis depth | Deep – 20 drives per node
Processor | Intel Xeon Bronze 3204 (6 C, 1.9 GHz) |
Memory (fixed) | 96 GB DDR4
Storage/drive options | 20 x 3.5-in. HDD capacity options: 12 TB (240 TB/node), 16 TB (320 TB/node)
SED/FIPS support | Yes (SAS drives) |
Cache SSD | 2 x 3.2 TB (1 x 0.8 TB for L3 cache version)
Inline data reduction | Yes |
Front-end connectivity | 2 x 10 GbE, 2 x 25 GbE, 2 x 40 GbE, or 2 x 100 GbE
Back-end connectivity | 2 x 10 GbE, 2 x 25 GbE, 2 x 40 GbE, or 2 x 100 GbE
PowerScale uses uplinks and peer-links to connect the ToR Cisco Nexus 9000 series switches to the 3-Tier Platform.
The PowerScale network topology uses the following port channels:
Adhere to the following restrictions:
Create a port channel for each node, starting at port channel (PC) or vPC 1001, to directly connect the PowerScale nodes to the 3-Tier Platform ToR switches.
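The following is a minimal, hedged Python sketch that generates NX-OS-style port-channel and vPC stanzas numbered from 1001. The interface names, VLAN ID, and descriptions are hypothetical placeholders for illustration, not values taken from this guide; consult the platform build documentation for the actual settings.

```python
# Minimal sketch: emit per-node port-channel/vPC stanzas in NX-OS style,
# numbered from PC/vPC 1001 as described above. Interface names, VLAN IDs,
# and descriptions are hypothetical placeholders.
def node_port_channel(node_index: int, member_ports: list[str], vlan: int) -> str:
    pc_id = 1000 + node_index  # node 1 -> port-channel/vPC 1001
    lines = [
        f"interface port-channel{pc_id}",
        f"  description PowerScale-node-{node_index}-frontend",
        "  switchport mode trunk",
        f"  switchport trunk allowed vlan {vlan}",
        f"  vpc {pc_id}",
    ]
    for port in member_ports:
        lines += [
            f"interface {port}",
            f"  channel-group {pc_id} mode active",  # LACP
        ]
    return "\n".join(lines)

# Example: node 1 uses Ethernet1/7 on each ToR switch (A and B sides).
print(node_port_channel(1, ["Ethernet1/7"], vlan=200))
```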
The following table shows the vPC maximum limit for Cisco Nexus 9000 Series NX-OS release 9.3(1) and later:
Table 53. vPC limits on Cisco Nexus switch
Cisco switch | vPC limit |
Cisco Nexus 9336C-FX2 switch | 80 |
Cisco Nexus 93180LC-EX switch | 48 |
With PowerScale OneFS 9.x, the leaf-spine back-end or flat back-end topology supports a maximum of 252 nodes on PowerScale clusters.
PowerScale leaf-and-spine architecture is based on the maximum internal bandwidth and 32-port count of Dell Z9100 and Dell S5232 switches or the 64-port count of Dell Z9264 switches.
A leaf-and-spine architecture provides the following benefits:
PowerScale supports only Ethernet as the back-end network.
The following table shows the minimum OneFS software version that is required on PowerScale nodes:
Table 53. PowerScale OneFS version requirements
PowerScale node | Minimum OneFS software version |
F200 | 9.1.0.4 |
F600 | 9.1.0.4 |
F900 | 9.2.0.0 |
A300, A3000, H700, or H7000 | 9.2.1.4
Back-end leaf-and-spine network deployments can have a minimum of one spine switch and two leaf switches. For small-to-medium clusters, the back-end network includes a pair of redundant ToR switches. Dell S5232 and Dell Z9264 Ethernet switches are supported in the leaf-and-spine architecture. The same switch model is used for both the leaf and spine layers.
The back-end architecture contains a leaf-and-spine layer. The nodes connect to leaf switches in the leaf layer. The aggregation and core network layers are condensed into a single spine layer.
The leaf-and-spine architecture requires that:
The following guidelines apply to the back-end switch configuration for PowerScale clusters:
Scale planning prevents recabling of the back-end network and simplifies upgrades: install the projected number of spine switches up front, and then scale the cluster by adding leaf switches. The leaf-spine back-end network supports a maximum of 252 nodes with 10 GbE, 25 GbE, or 40 GbE back-end connectivity, and a maximum of 128 nodes with 100 GbE back-end connectivity.
The following table shows the switch requirements for the interfaces connecting to the A-side network as the cluster scales. The same number of additional switches is required for the B-side network. The A and B networks provide redundant paths.
Table 54. S5232 switch as leaf and spine:
Maximum nodes | Leaf switches | Spine switches | Leaf uplinks to each spine |
For all 40 GbE ports | |||
44 | 2 | 1 | 9 |
66 | 3 | 1 | 9 |
88 | 4 | 2 | 5 |
110 | 5 | 2 | 5 |
132 | 6 | 2 | 5 |
154 | 7 | 3 | 3 |
176 | 8 | 3 | 3 |
198 | 9 | 3 | 3 |
220 | 10 | 5 | 2 |
242 | 11 | 5 | 2 |
252 | 12 | 5 | 2 |
For all 100 GbE ports | |||
32 | 2 | 1 | 16 |
64 | 4 | 2 | 8 |
112 | 7 | 4 | 4 |
128 | 8 | 4 | 4 |
With 22 x 40 GbE downlinks per leaf, you can connect up to 88 10 GbE nodes using 4 x 10 GbE breakout cables.
With 16 x 100 GbE downlinks per leaf, you can connect up to 64 25 GbE nodes using 4 x 25 GbE breakout cables.
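A minimal Python sketch of the breakout arithmetic above, assuming each 40 GbE or 100 GbE downlink port splits into four lower-speed node connections:

```python
# Each 40 GbE or 100 GbE downlink port breaks out into four node connections.
def breakout_node_count(downlink_ports: int, lanes_per_port: int = 4) -> int:
    return downlink_ports * lanes_per_port

print(breakout_node_count(22))  # 88 x 10 GbE nodes from 22 x 40 GbE downlinks
print(breakout_node_count(16))  # 64 x 25 GbE nodes from 16 x 100 GbE downlinks
```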
Table 55. Z9264 switch as leaf and spine:
Maximum nodes | Leaf switches | Spine switches | Cables between each pair |
For all 40 GbE ports
88 | 2 | 1 | 20 |
176 | 4 | 2 | 10 |
252 | 6 | 3 | 7 |
For all 100 GbE ports
64 | 2 | 1 | 32 |
128 | 4 | 2 | 16 |
The uplink bandwidth must be equal to or greater than the total bandwidth of all the nodes that are connected to the leaf switch. For example, a leaf switch with nine 40 Gbps downlink connections requires 360 Gbps of uplink bandwidth, so at least four 100 Gbps uplink connections to the spine layer are needed from that leaf.
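A minimal sketch of this sizing rule in Python; the nine-downlink case is the example from the text, and the 22-downlink case corresponds to a fully populated S5232 leaf in the 40 GbE table above.

```python
import math

# Uplink bandwidth from a leaf must cover the aggregate bandwidth of the
# nodes attached to it.
def min_uplinks(downlinks: int, downlink_gbps: float, uplink_gbps: float = 100.0) -> int:
    return math.ceil(downlinks * downlink_gbps / uplink_gbps)

# Example from the text: nine 40 Gbps downlinks need 360 Gbps, so four 100 Gbps uplinks.
print(min_uplinks(9, 40))   # 4
# A fully populated S5232 leaf with 22 x 40 GbE downlinks needs nine uplinks.
print(min_uplinks(22, 40))  # 9
```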
The following table shows the recommended uplink (leaf-to-spine) and downlink (back-end node connectivity) port reservations on the Z9100, Z9264, and S5232 switches:
Table 56. Recommended reserved ports for Z9264 and S5232 switches
Switch | Reserve uplink ports | Reserve downlink ports |
Z9100 switch with all 40 Gb ports | 1 to 10 | 11 to 32 |
Z9100 switch with all 100 Gb ports | 1 to 16 | 17 to 32 |
Z9264 switch with all 40 Gb ports | 1 to 20 | 21 to 64 |
Z9264 switch with all 100 Gb ports | 1 to 32 | 33 to 64 |
S5232 switch with all 40 Gb ports | 23 to 32* | 1 to 22 |
S5232 switch with all 100 Gb ports | 17 to 32* | 1 to 16 |
Note: The Dell S5232 switch port 32 does not support breakout connections. For more information, see the Dell PowerScale: Leaf-Spine Network Best Practices Guide.
OneFS 8.2 and later supports SmartConnect with multiple SmartConnect Service IP addresses (SSIPs) per subnet.
Dell Technologies recommends that:
PowerScale nodes run the OneFS operating system, which provides encryption, file storage, and replication features.
The OneFS operating system is available as a cluster of PowerScale OneFS nodes that contain only self-encrypting drives (SEDs). The system requirements and management of data at rest on self-encrypting nodes are identical to those for nodes without self-encrypting drives. Clusters of mixed node types are not supported.
Self-encrypting drives store data on a PowerScale cluster that is designed for data at rest encryption (D@RE). With D@RE, data stored on a self-encrypting drive is encrypted to prevent unauthorized access. All data written to the storage device is encrypted when it is stored, and all data read from the storage device is decrypted when it is read.
The OneFS operating system combines file systems, volume managers, and data protection into a single intelligent distributed file system that runs on a PowerScale storage cluster.
The 3-Tier Platform configures the two front-end interfaces of each node in an LACP port channel. The front-end interfaces use SmartConnect to load balance share traffic across the nodes in the cluster depending on the configuration.
The OneFS operating system leverages the SyncIQ licensed feature for replication. SyncIQ is an application that enables you to manage and automate data replication between two PowerScale clusters. SyncIQ delivers unique, highly parallel replication performance that scales with the dataset to provide a solid foundation for disaster recovery. SyncIQ can send and receive data on every node in the PowerScale cluster, so replication performance increases as your data grows.
InsightIQ provides performance monitoring and reporting tools to help you maximize the performance of a PowerScale scale-out NAS platform. With InsightIQ, you can identify performance bottlenecks in workflows and optimize the amount of high-performance storage that is required in an environment. InsightIQ provides advanced analytics to optimize applications, correlate workflow and network events, and monitor storage requirements.
PowerScale OneFS is available in a perpetual and a subscription model with various bundles. The following table shows the available models:
Table 57. PowerScale OneFS subscription details
Subscription model | Type | Software |
Perpetual | Basic bundle | SmartConnect, SnapshotIQ |
Enterprise Bundle | SmartConnect, SnapshotIQ, SmartQuotas | |
Enterprise Advanced Bundle | SmartConnect, SnapshotIQ, SmartQuotas, SyncIQ, SmartPools | |
Subscription | OneFS Essentials Subscription | SmartConnect, SnapshotIQ, SmartQuotas |
OneFS Advanced Subscription | All software except CloudPools | |
OneFS CloudPools third-party Subscription | CloudPools for third party |
The following table shows the PowerScale license features:
Table 58. PowerScale license features
Feature | Details |
CloudPools | Cloud tiering |
Security hardening | Cluster security, Security Technical Implementation Guide (STIG), and so on |
HDFS | Hadoop file system protocol |
Isilon Swift | OneFS Swift object API |
SmartConnect Advanced | Cluster connection load balancing |
SmartDedupe | Data deduplication |
SmartLock | WORM data immutability |
SmartPools | Data tiering |
SmartQuotas | Quota management |
SnapshotIQ | File system snapshots |
SyncIQ | Cluster asynchronous replication |
InsightIQ | Performance monitoring and reporting |
The number of PowerScale nodes that are supported depends on the ports that are available in the ToR switch.
All the node front-end ports are placed in LACP port channels. The front-end ports for each of the nodes are connected to a pair of redundant network switches.
Note: Do not use ports 1 to 6 or ports 33 to 36 for 40 GbE node connectivity. This restriction is due to port limitations in Cisco Nexus OS 9.x. Use the last four ports (33 to 36) for virtual port channel (vPC) connections or uplinks. You can scale up to 26 nodes per switch pair. For more information, see Guidelines and Limitations for Layer 2 Interfaces.
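A minimal sketch of the port accounting behind the 26-node figure, assuming a 36-port ToR switch with ports 1 to 6 and 33 to 36 reserved as described in the note:

```python
# Usable node ports on a 36-port ToR switch after reserving ports 1-6
# (NX-OS 9.x port limitation) and 33-36 (vPC/uplinks).
TOTAL_PORTS = 36
RESERVED = set(range(1, 7)) | set(range(33, 37))  # ports 1-6 and 33-36

node_ports = [p for p in range(1, TOTAL_PORTS + 1) if p not in RESERVED]
print(len(node_ports))                # 26 -> up to 26 nodes per switch pair
print(node_ports[0], node_ports[-1])  # ports 7 through 32
```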
The following table shows the number of nodes that are supported for OneFS 8.2.1 and later:
Table 59. OneFS 8.2.1-supported nodes
Node | Node scalability | Capacity scale per chassis |
A300 | 4 to 252, 10 GbE; 4 to 252, 25 GbE; 4 to 252, 40 GbE; 4 to 128, 100 GbE | 120 TB, 240 TB, 480 TB, 720 TB, 960 TB
A3000 | 4 to 252, 10 GbE; 4 to 252, 25 GbE; 4 to 252, 40 GbE; 4 to 128, 100 GbE | 960 TB, 1.28 PB
F200 | 3 to 252, 10 GbE; 3 to 252, 25 GbE | 3.84 TB, 7.68 TB, 15.36 TB, 30.72 TB
F600 | 3 to 252, 10 GbE; 3 to 252, 25 GbE; 3 to 252, 40 GbE; 3 to 128, 100 GbE | 15 TB, 30 TB, 60 TB, 122 TB
F900 | 3 to 252, 10 GbE; 3 to 252, 25 GbE; 3 to 252, 40 GbE; 3 to 128, 100 GbE | 46 TB, 92 TB, 184 TB, 368 TB
H700 | 4 to 252, 10 GbE; 4 to 252, 25 GbE; 4 to 252, 40 GbE; 4 to 128, 100 GbE | 120 TB, 240 TB, 480 TB, 720 TB, 960 TB
H7000 | 4 to 252, 10 GbE; 4 to 252, 25 GbE; 4 to 252, 40 GbE; 4 to 128, 100 GbE | 960 TB, 1.28 PB