PowerStore supports Ethernet connectivity through ports on the embedded module and on optional I/O modules. PowerStore supports Fibre Channel connectivity through ports on optional I/O modules.
The fastest I/O module should be installed in slot 0. On PowerStore 1000 – PowerStore 9200 models, I/O module slot 0 is 16-lane PCIe Gen3 while I/O module slot 1 is 8-lane. If the system is configured with a 100 GbE I/O module, it should be installed in slot 0 to enable the highest bandwidth. If a second 100 GbE I/O module is required, it can be installed in slot 1. Both I/O module slots on PowerStore 500 models are 8-lane PCIe, and the 100 GbE I/O module is not supported on this platform.
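As a rough illustration of why slot placement matters, the usable bandwidth of each slot can be estimated from the PCIe Gen3 line rate (8 GT/s per lane with 128b/130b encoding). The figures below are back-of-the-envelope estimates, not Dell specifications:

```python
# Approximate usable PCIe Gen3 bandwidth: 8 GT/s per lane, 128b/130b encoding.
GEN3_GBPS_PER_LANE = 8 * 128 / 130 / 8  # ~0.985 GB/s per lane

slot0 = 16 * GEN3_GBPS_PER_LANE  # 16-lane slot 0: ~15.8 GB/s
slot1 = 8 * GEN3_GBPS_PER_LANE   # 8-lane slot 1:  ~7.9 GB/s

# A 2-port 100 GbE module can carry up to 200 Gb/s = 25 GB/s of line rate,
# so the wider slot 0 is the placement that wastes the least bandwidth.
print(f"slot 0: {slot0:.1f} GB/s, slot 1: {slot1:.1f} GB/s")
```

Slot 0 offers roughly twice the bandwidth of slot 1, which is why a 100 GbE module belongs in slot 0 when only one is installed.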
When installing a 32 Gb Fibre Channel I/O module, it is recommended to use I/O module slot 0 first, unless the system contains, or will later contain, a 100 GbE I/O module. In that case, the Fibre Channel I/O module can be installed in slot 1.
PowerStore Fibre Channel ports support speeds of 32 Gb/s, 16 Gb/s, 8 Gb/s, and 4 Gb/s. The negotiated speed depends on the SFP that is used and the switch port or HBA that is connected. Because higher speeds allow for greater MBPS and IOPS capabilities, it is recommended that you use the highest speed that the environment supports.
Fibre Channel ports are available on I/O modules that are inserted into I/O module slots on the nodes. The Fibre Channel I/O module is 16-lane PCIe Gen3. On PowerStore 1000 – 9200 models, I/O module slot 0 is also 16-lane while I/O module slot 1 is 8-lane. If Fibre Channel I/O modules are installed in both I/O module slots, it is recommended to cable the ports in I/O module slot 0 first because of the PCIe difference. The PCIe lanes in I/O module slot 1 are a limiting factor for total MBPS only when all four ports on the Fibre Channel I/O module are operating at 32 Gb/s. Both I/O module slots on PowerStore 500 models are 8-lane PCIe, so there is no slot preference on that platform.
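The claim that slot 1 limits throughput only with four ports at 32 Gb/s can be checked with simple arithmetic, using the commonly cited nominal Fibre Channel data rates and an approximate 8-lane PCIe Gen3 figure (both are estimates, not Dell specifications):

```python
# Nominal per-port Fibre Channel throughput (MB/s, one direction).
FC_MBPS = {32: 3200, 16: 1600, 8: 800}

# Approximate usable bandwidth of an 8-lane PCIe Gen3 slot:
# 8 lanes x 8 GT/s x 128/130 encoding, converted to MB/s (~7877 MB/s).
SLOT1_MBPS = int(8 * 8 * 128 / 130 * 1000 / 8)

four_ports_32g = 4 * FC_MBPS[32]  # 12800 MB/s: exceeds the 8-lane slot
four_ports_16g = 4 * FC_MBPS[16]  # 6400 MB/s: fits within the 8-lane slot
print(four_ports_32g > SLOT1_MBPS, four_ports_16g > SLOT1_MBPS)
```

Only the all-ports-at-32 Gb/s case exceeds the 8-lane slot, which matches the guidance above.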
The NVMe over Fibre Channel (NVMe/FC) protocol provides connectivity using the same Fibre Channel ports but can decrease the transport latency between PowerStore and the host. Note that all parts of the network, including switches and HBAs, must support NVMe over Fibre Channel.
PowerStore optical Ethernet ports support speeds of up to 25 Gb/s, based on the SFP that is used. Copper Ethernet ports support speeds of up to 10 Gb/s. Because higher speeds allow for greater MBPS and IOPS capabilities, it is recommended that you use the highest speed supported by your environment.
With PowerStoreOS 3.0, a new 2-port Ethernet card is introduced that supports speeds of up to 100 Gb/s. This 100 GbE card is supported on PowerStore 1000 – 9200 models in I/O module slot 0.
Jumbo frames (MTU 9000) are recommended for increased network efficiency. Jumbo frames must be supported on all parts of the network between PowerStore and the host.
Map additional Ethernet ports for iSCSI to increase system MBPS capabilities. Enable jumbo frames for iSCSI by setting the cluster MTU to 9000 and the storage network MTU to 9000.
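The benefit of jumbo frames can be illustrated with rough numbers: larger frames carry proportionally less header overhead and, more importantly, a given I/O requires far fewer frames, which reduces per-packet processing. The sketch below assumes plain IPv4/TCP with no options; actual iSCSI framing adds further overhead:

```python
# Rough payload efficiency of a TCP/IP frame: (MTU - IP/TCP headers) / MTU.
HEADERS = 40  # 20-byte IPv4 header + 20-byte TCP header, no options

def efficiency(mtu):
    return (mtu - HEADERS) / mtu

def frames_for(io_bytes, mtu):
    payload = mtu - HEADERS
    return -(-io_bytes // payload)  # ceiling division

for mtu in (1500, 9000):
    # A 256 KiB I/O takes 180 frames at MTU 1500 but only 30 at MTU 9000.
    print(mtu, f"{efficiency(mtu):.1%}", frames_for(256 * 1024, mtu))
```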
The embedded module 4-port card and the optional network I/O modules are 8-lane PCIe Gen3. When more than two 25 GbE ports are used, these cards are oversubscribed for MBPS. To maximize MBPS scaling in the system, consider cabling and mapping the first two ports of all cards in the system first. Then, cable and map other ports as needed.
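The oversubscription point can be verified with simple arithmetic: an 8-lane PCIe Gen3 card carries roughly 63 Gb/s of usable bandwidth, so more than two 25 GbE ports exceed it. A small sketch (approximate figures, not Dell specifications):

```python
# Usable bandwidth of an 8-lane PCIe Gen3 card:
# 8 lanes x 8 GT/s x 128/130 encoding, ~63 Gb/s.
GEN3_X8_GBPS = 8 * 8 * 128 / 130

def oversubscribed(ports, speed_gbps=25):
    """True if the ports' combined line rate exceeds the card's PCIe bandwidth."""
    return ports * speed_gbps > GEN3_X8_GBPS

# Two 25 GbE ports (50 Gb/s) fit; three or four (75-100 Gb/s) do not.
print([oversubscribed(n) for n in (1, 2, 3, 4)])
```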
When PowerStore models in unified mode are used for both iSCSI and file access, it is recommended that you use separate physical ports for NAS and iSCSI.
The NVMe over TCP (NVMe/TCP) protocol provides connectivity using the same physical Ethernet ports as iSCSI. NVMe/TCP can be enabled on the same Storage Network as iSCSI or different Storage Networks can be created to isolate iSCSI and NVMe/TCP traffic.
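When creating separate storage networks to isolate iSCSI and NVMe/TCP traffic, the subnets must not overlap. A hypothetical planning check using Python's standard `ipaddress` module (the subnet values here are examples only, not recommended addresses):

```python
import ipaddress

# Hypothetical subnets for two isolated storage networks (example values).
iscsi_net = ipaddress.ip_network("192.168.10.0/24")
nvme_tcp_net = ipaddress.ip_network("192.168.20.0/24")

# Non-overlapping subnets are a prerequisite for clean traffic isolation.
print(iscsi_net.overlaps(nvme_tcp_net))  # False: safe as separate networks
```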
When PowerStore models in unified mode are used for both NVMe/TCP and file access, it is recommended that you use separate dedicated physical ports for NAS and NVMe/TCP.
Dell SmartFabric Storage Software (SFSS) provides Centralized Discovery Controllers (CDCs) for NVMe/TCP Endpoints. These CDCs facilitate endpoint discovery, registration, soft zoning, and event notifications. With SFSS, Dell Technologies provides the industry's first comprehensive connectivity automation solution for NVMe/TCP endpoints such as Dell PowerEdge, Dell PowerStore, and Dell PowerMax. For more information about SFSS support, see the SmartFabric Storage Software Deployment Guide on the Storage Networking Info Hub.
It is recommended that you use bonded ports for NAS connectivity. Prior to PowerStoreOS 3.0, NAS servers automatically created their interfaces on the two bonded ports on the embedded module 4-port card. With PowerStoreOS 3.0, user-defined link aggregations can be used to reserve different physical ports for file access only. In PowerStoreOS 4.0, user-defined link aggregations also support storage iSCSI and replication connectivity. For the highest performance and availability from any aggregated ports, it is recommended that you configure link aggregation across the corresponding switch ports.
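Link aggregation balances traffic per flow: each flow hashes to one member link, so a single session cannot exceed one port's speed, while many sessions spread across the bond. A simplified sketch of that behavior (real switches and PowerStore use their own, vendor-specific hash inputs; this is illustrative only):

```python
import zlib

MEMBERS = 2  # ports in the hypothetical aggregation

def member_for(src_ip, dst_ip, src_port, dst_port):
    """Pick a member link by hashing the flow's addressing tuple."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % MEMBERS

# A given flow always lands on the same member link.
flow = ("10.0.0.5", "10.0.0.50", 45001, 2049)
assert len({member_for(*flow) for _ in range(10)}) == 1

# Many flows (varying source ports) distribute across the members.
links = {member_for("10.0.0.5", "10.0.0.50", 40000 + i, 2049) for i in range(32)}
print(sorted(links))
```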
Enable jumbo frames for NAS by setting the cluster MTU to 9000.
If the PowerStore system also provides block access through iSCSI or NVMe/TCP, or asynchronous replication over Ethernet, it is recommended that you use different physical ports for NAS than the ports that are tagged for replication or storage networks.
PowerStoreOS 3.5 adds Fail-Safe Networking (FSN) support for file interfaces. FSN is a high-availability feature that enables configuring ports in a primary/backup configuration. Under normal circumstances, the primary ports are designated as active and are used to service I/O. If all primary ports of an FSN go offline, the backup ports automatically become active and continue to service I/O. This enables redundancy in case of port, cable, or switch failure. When the primary ports are restored, the system automatically makes the primary ports active again. For optimal performance in the event of a failure, it is recommended that the selected active and standby ports or bonds have matching configurations. For more information about FSN, see the Dell PowerStore: File Capabilities white paper on the PowerStore Info Hub.
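The failover behavior described above can be sketched as a small selection function. This is an illustrative model only, not PowerStore's implementation, and the port names are hypothetical:

```python
# Minimal sketch of Fail-Safe Networking port selection: primary ports serve
# I/O while any of them is up; if all primaries fail, backups take over; when
# a primary recovers, the system reverts to the primary ports automatically.
def active_ports(primary_up, backup_up):
    """primary_up/backup_up: dicts of port name -> link state (True = up)."""
    if any(primary_up.values()):
        return [port for port, up in primary_up.items() if up]
    return [port for port, up in backup_up.items() if up]

primary = {"0_eth2": True, "0_eth3": True}
backup = {"1_eth2": True, "1_eth3": True}

print(active_ports(primary, backup))                      # primaries serve I/O
print(active_ports({p: False for p in primary}, backup))  # failover to backups
```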