Isilon is a scale-out NAS storage solution that delivers increased performance for file-based data applications and workflows from a single file-system architecture.
Isilon provides scale-out capacity for use as NFS and SMB (CIFS) shares for VMware vSphere VMs.
Isilon is available in the following configurations:
The following table shows the hardware components with each configuration:
Isilon model | Node | Processor | Memory |
Isilon All-Flash | F800, F810 | Intel E5-2697Av4 16-core, 2.6 GHz | 256 GB |
Isilon Hybrid Scale-out NAS | H400 | Intel D-1527 4-core, 2.2 GHz | 64 GB |
| H500 | Intel E5-2630v4 10-core, 2.2 GHz | 128 GB |
| H5600 | Intel E5-2680v4 14-core, 2.4 GHz | 256 GB |
| H600 | Intel E5-2680v4 14-core, 2.4 GHz | 256 GB |
Isilon Archive Scale-out NAS | A200 | Intel D-1508 2-core, 2.2 GHz | 16 GB |
| A2000 | Intel D-1508 2-core, 2.2 GHz | 16 GB |
The following Cisco Nexus switches provide front-end connectivity:
The Isilon back-end Ethernet switches provide:
Note: Leaf modules apply only to configurations that exceed 48 nodes (10 GbE) or 32 nodes (40 GbE).
Isilon All-Flash, hybrid, and archive models are contained within a four-node chassis.
There are four compute slots per chassis, each containing:
Note: 1 GbE connections are not used.
The following table provides hardware and software specifications for each Isilon model:
Component | F800 | F810 | H600 | H5600 | H500 | H400 | A200 | A2000 |
Processor per node | 16 core 2.6 GHz | 16 core 2.6 GHz | 14 core 2.4 GHz | 14 core 2.4 GHz | 10 core 2.2 GHz | 4 core 2.2 GHz | 2 core 2.2 GHz | 2 core 2.2 GHz |
Memory per node (GB) | 256 | 256 | 256 | 256 | 128 | 64 | 16 | 16 |
Chassis capacity | 1.6 TB SSD: 96 TB | 3.84 TB SSD: 230 TB | 600 GB SAS: 72 TB | 10 TB Hard drive: 800 TB | 2 TB Hard drive: 120 TB | 2 TB Hard drive: 120 TB | 2 TB Hard drive: 120 TB | 10 TB Hard drive: 800 TB |
| 3.2 TB SSD: 192 TB | 7.68 TB SSD: 460 TB | 1.2 TB SAS: 144 TB | 12 TB Hard drive: 960 TB | 4 TB Hard drive: 240 TB | 4 TB Hard drive: 240 TB | 4 TB Hard drive: 240 TB | |
| 15.36 TB SSD: 924 TB | 15.36 TB SSD: 924 TB | NA | NA | 8 TB Hard drive: 480 TB | 8 TB Hard drive: 480 TB | 8 TB Hard drive: 480 TB | |
Front-end networking | 2 x 10 GbE or 40 GbE | 2 x 10 GbE or 40 GbE | 2 x 10 GbE or 40 GbE | 2 x 10 GbE or 40 GbE | 2 x 10 GbE or 40 GbE | 2 x 10 GbE | 2 x 10 GbE | 2 x 10 GbE |
Back-end networking | 2 x 40 GbE | 2 x 40 GbE | 2 x 40 GbE | 2 x 40 GbE | 2 x 40 GbE | 2 x 10 GbE | 2 x 10 GbE | 2 x 10 GbE |
Isilon network topology uses uplinks and peer-links to connect the ToR Cisco Nexus 9000 Series Switches to the VxBlock System.
The following figure provides Isilon network connectivity in a VxBlock System:
The following port channels are used in the Isilon network topology:
Note: Additional Cisco Nexus 9000 Series Switch pair uplinks start at port channel or vPC ID 4 and increment for each switch pair.
Note: Additional Cisco Nexus 9000 Series Switch pair peer-links start at port channel or vPC ID 52 and increment for each switch pair.
Note: Isilon node port channels start at port channel or vPC ID 1002 and increment for each node.
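The ID assignments in these notes follow a simple incrementing convention. The following sketch is illustrative only (the helper names are hypothetical, not a Dell EMC or Cisco utility) and assumes zero-based indexes for additional switch pairs and nodes:

def uplink_vpc_id(extra_switch_pair: int) -> int:
    # Additional switch-pair uplinks start at port channel/vPC ID 4.
    return 4 + extra_switch_pair

def peer_link_vpc_id(extra_switch_pair: int) -> int:
    # Additional switch-pair peer-links start at port channel/vPC ID 52.
    return 52 + extra_switch_pair

def isilon_node_vpc_id(node_index: int) -> int:
    # Isilon node port channels start at port channel/vPC ID 1002.
    return 1002 + node_index

# Example: the first four Isilon nodes map to IDs 1002 through 1005.
print([isilon_node_vpc_id(n) for n in range(4)])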
The following reservations apply for the Isilon topology:
With the Isilon OneFS 8.2.0 operating system, the back-end topology supports scaling a sixth generation Isilon cluster up to 252 nodes.
Isilon uses a spine and leaf architecture that is based on the maximum internal bandwidth and 32-port count of Dell Z9100 switches. A spine and leaf architecture provides the following benefits:
Spine and leaf network deployments can have a minimum of one spine switch and two leaf switches. For small to medium clusters, the back-end network includes a pair of redundant ToR switches. Only the Dell Z9100 Ethernet switch is supported in the spine and leaf architecture.
Note: The Cisco Nexus operating system 9.3 is required on the ToR switch to support more than 144 Isilon nodes.
The Isilon back-end architecture contains a spine and a leaf layer. The Isilon nodes connect to leaf switches in the leaf layer. The aggregation and core network layers are condensed into a single spine layer. The spine and leaf architecture requires the following conditions:
Scale planning
Scale planning prevents recabling of the back-end network. Scale planning also simplifies upgrades: install the projected number of spine switches up front, then scale the cluster by adding leaf switches.
With the use of breakout cables, an A200 cluster can use three leaf switches and one spine switch for 252 nodes. The maximum node counts in the following table assume that each node is connected to a leaf switch using a 40 GbE port.
The following table provides the switch requirements as the cluster scales:
Maximum nodes | Leaf switches | Spine switches | Leaf uplinks to each spine |
66 | 2–3 | 1 | Up to 9 |
132 | 4–6 | 2 | Up to 5
220 | 7–10 | 3 | Up to 3
252* | 11–16 | 5** | 2
* Although 16 leaf and 5 spine switches can connect 352 nodes, Isilon OneFS 8.2 supports a maximum of 252 nodes.
** Four spine switches are not supported. You must have an even number of uplinks to each spine, and a configuration with four spines and eight uplinks does not provide enough bandwidth to support 22 nodes on each leaf.
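The switch counts in this table follow from the per-leaf limit noted above and the uplink-bandwidth rule described under Scaling guidelines below. The following sketch is a rough sizing aid only (the helper names are hypothetical, not part of OneFS) and assumes 40 Gbps node downlinks, 100 Gbps leaf-to-spine uplinks, and a maximum of 22 nodes per leaf:

import math

DOWNLINK_GBPS = 40        # back-end connection per Isilon node (40 GbE)
UPLINK_GBPS = 100         # Dell Z9100 leaf-to-spine link (100 GbE)
MAX_NODES_PER_LEAF = 22   # per the footnote above

def uplinks_per_leaf(nodes_on_leaf: int) -> int:
    # Uplink bandwidth must equal or exceed the total downlink bandwidth of the leaf.
    return math.ceil(nodes_on_leaf * DOWNLINK_GBPS / UPLINK_GBPS)

def leaves_required(total_nodes: int) -> int:
    return math.ceil(total_nodes / MAX_NODES_PER_LEAF)

# Nine 40 Gbps downlinks need 360 Gbps, so four 100 Gbps uplinks.
print(uplinks_per_leaf(9))     # 4
# A 220-node cluster needs at least 10 leaf switches at 22 nodes per leaf.
print(leaves_required(220))    # 10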
Scaling guidelines
The uplink bandwidth must be equal to or greater than the total bandwidth of all the nodes that are connected to the leaf. For example, if a leaf switch has nine downlink connections, nine downlinks at 40 Gbps require 360 Gbps of bandwidth, so four 100 Gbps uplink connections to the spine layer should be made from that leaf. The following maximums apply:
OneFS 8.2.0 uses SmartConnect with multiple SmartConnect Service IPs (SSIPs) per subnet.
The number of SSIPs available per subnet depends on the SmartConnect license. SmartConnect Basic allows two SSIPs per subnet, while SmartConnect Advanced allows six SSIPs per subnet. SmartConnect Multi-SSIP is not an extra layer of load balancing for client connections. Rather, the additional SSIPs provide redundancy and reduce failure points in the client connection sequence.
SSIPs are only supported for use by a DNS server. Other implementations with SSIPs are not supported. The SSIP addresses and SmartConnect Zone names must not have reverse DNS entries, also known as pointer records.
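As a minimal illustration of the client side of SmartConnect, assuming a hypothetical SmartConnect zone name (isilon.example.com) whose DNS delegation points at the SSIPs, each lookup is answered by an SSIP with the address of a node chosen by the configured balancing policy:

import socket

SMARTCONNECT_ZONE = "isilon.example.com"  # hypothetical zone name

# Successive lookups can return different node IP addresses, spreading client
# connections across the cluster according to the SmartConnect policy.
for attempt in range(3):
    node_ip = socket.gethostbyname(SMARTCONNECT_ZONE)
    print(f"mount attempt {attempt + 1}: connect to node {node_ip}")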
The following figure shows the Isilon OneFS 8.2.0 support for multiple SmartConnect Service IPs (SSIPs) per subnet:
The following list provides the recommendations and considerations for the multiple SSIPs per subnet:
Isilon runs the OneFS operating system, which provides encryption, file storage, and replication features.
Encryption
The Isilon OneFS operating system is available as a cluster of Isilon OneFS nodes that contain only self-encrypting drives (SEDs). The system requirements and management of data at rest on self-encrypting nodes are identical to those of nodes without self-encrypting drives. Clusters of mixed node types are not supported.
Self-encrypting drives store data on an Isilon cluster that is designed for data-at-rest encryption (D@RE). D@RE on self-encrypting drives means that data stored on a device is encrypted to prevent unauthorized data access. All data written to the storage device is encrypted when it is stored, and all data read from the storage device is decrypted when it is read. The stored data is encrypted with a 256-bit AES data-encryption key and decrypted in the same manner. OneFS controls data access by combining the drive authentication key with on-disk data-encryption keys. SED options are not included.
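The key handling described above can be pictured as envelope encryption: a data-encryption key protects the stored data, and that key is itself protected by the drive authentication key. The following sketch is a conceptual illustration only, not the SED or OneFS implementation, and uses the third-party Python cryptography package:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

authentication_key = os.urandom(32)                         # drive authentication key
data_encryption_key = AESGCM.generate_key(bit_length=256)   # 256-bit AES data key

# The data key is never stored in the clear; it is wrapped with the authentication key.
wrapped_key = aes_key_wrap(authentication_key, data_encryption_key)

# Writes are encrypted with the data key; reads unwrap the data key and decrypt.
nonce = os.urandom(12)
ciphertext = AESGCM(data_encryption_key).encrypt(nonce, b"block of file data", None)
plaintext = AESGCM(aes_key_unwrap(authentication_key, wrapped_key)).decrypt(nonce, ciphertext, None)
assert plaintext == b"block of file data"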
File storage
The Isilon OneFS operating system combines the three layers of traditional storage architectures (file system, volume manager, and data protection) into one unified software layer. This creates a single intelligent distributed file system that runs on an Isilon storage cluster.
VxBlock 1000 configures the two front-end interfaces of each node in an LACP port channel. SmartConnect then uses the front-end interfaces to load balance share traffic across the nodes in the cluster, depending on the configuration.
Replication
The Isilon OneFS operating system leverages the SyncIQ licensed feature for replication. SyncIQ is an application that enables you to manage and automate data replication between two Isilon clusters. SyncIQ delivers unique, highly parallel replication performance that scales with the dataset to provide a solid foundation for disaster recovery. SyncIQ can send and receive data on every node in the Isilon cluster so replication performance is increased as your data grows.
InsightIQ
InsightIQ provides performance monitoring and reporting tools to help you maximize the performance of a Dell EMC Isilon scale-out NAS platform. With InsightIQ, you can identify performance bottlenecks in workflows and optimize the amount of high-performance storage required in an environment. InsightIQ provides advanced analytics to optimize applications, correlate workflow and network events, and monitor storage requirements.
Isilon OneFS is available in perpetual and subscription models, with various bundles.
Licensing model | Type | Software |
Perpetual | Basic bundle | SmartConnect, SnapshotIQ |
| Enterprise Bundle | SmartConnect, SnapshotIQ, SmartQuotas |
| Enterprise Advanced Bundle | SmartConnect, SnapshotIQ, SmartQuotas, SyncIQ, SmartPools |
Subscription | OneFS Essentials Subscription | SmartConnect, SnapshotIQ, SmartQuotas |
| OneFS Advanced Subscription | All software except CloudPools |
| OneFS CloudPools third-party Subscription | CloudPools for third party |
The following table lists Isilon license features:
Feature | Details |
CloudPools | Cloud tiering |
Security hardening | Cluster security (STIG and so on)
HDFS | Hadoop file system protocol |
Isilon Swift | OneFS Swift object API |
SmartConnect Advanced | Cluster connection load balancing |
SmartDedupe | Data deduplication |
SmartLock | WORM data immutability |
SmartPools | Data tiering |
SmartQuotas | Quota management |
SnapshotIQ | File system snapshots |
SyncIQ | Cluster asynchronous replication |
Sixth generation Isilon nodes | Current generation of Isilon cluster hardware |
InsightIQ | Performance monitoring and reporting |
The number of supported Isilon nodes depends on the 10 GbE or 40 GbE ports available in the system.
All node front-end ports (10 GbE or 40 GbE) are placed in LACP port channels. The front-end ports for each of the nodes are connected to a pair of redundant network switches.
For Isilon OneFS 8.1, the maximum Isilon configuration requires two pairs of ToR switches.
The following table indicates the number of nodes that are supported for Isilon OneFS 8.1:
Node | Node scalability | Capacity scale per chassis |
F800 | 4–96 (10 GbE); 4–48 (40 GbE) | 96 TB
H600 | 4–96 (10 GbE); 4–48 (40 GbE) | 72 TB
H500 | 4–96 (10 GbE); 4–48 (40 GbE) | 120 TB
H400 | 4–96 (10 GbE) | 120 TB
A200 | 4–96 (10 GbE) | 120 TB
A2000 | 4–96 (10 GbE) | 800 TB
The following table indicates the number of nodes that are supported for Isilon OneFS 8.2.1:
Node | Node scalability | Capacity scale per chassis |
F810 | 4–252 (10 GbE); 4–252 (40 GbE) | 230/460/924 TB
F800 | 4–252 (10 GbE); 4–252 (40 GbE) | 96/192/924 TB
H600 | 4–252 (10 GbE); 4–252 (40 GbE) | 72/144 TB
H5600 | 4–252 (10 GbE); 4–252 (40 GbE) | 800/960 TB
H500 | 4–252 (10 GbE); 4–252 (40 GbE) | 120/240/480 TB
H400 | 4–252 (10 GbE) | 120/240/480 TB
A200 | 4–252 (10 GbE) | 120/240/480 TB
A2000 | 4–252 (10 GbE) | 800 TB
Note: For Isilon OneFS 8.2.1, the maximum Isilon configuration requires a spine and leaf back-end architecture with 32-port Dell Z9100 switches.