SmartFabric Storage Software Deployment Guide: IP network requirements
No special configuration is required: the network transports NVMe/TCP the same way it transports any other TCP traffic. However, the following recommendations and best practices help ensure optimal performance of an NVMe/TCP network.
The switches should meet the criteria listed in this KB Article. For a list of switches validated by Dell Technologies, see the NVMe/TCP Switch Interoperability Matrix on Dell's E-lab Interoperability Navigator.
In most solutions, jumbo frames improve the performance of IP SAN traffic. Unless analysis of the application that will use NVMe/TCP storage indicates otherwise, an MTU of 9000 is recommended on endpoints, with 9216 configured on switches. For an MTU performance comparison, see the NVMe Transport Performance Comparison White Paper.
The following is a list of points where the MTU can be configured; these settings should be aligned across all of them. The default is 1500 on SFSS, vSphere, PowerStore, and most switches.
| Component | Configuration point |
|---|---|
| SFSS | The global setting applies to all storage interfaces. |
| vSphere | vSwitch properties, vDS Advanced Settings, and VMkernels. |
| Switches | Interface and global level, depending on the vendor, model, and operating system. |
| PowerStore | Cluster level and storage network level. |
| PowerMax | IP interface level. |
| Linux | Bond interfaces, bridge interfaces, and VLAN interfaces. Depending on the Linux distribution and architecture, there may be other locations. |
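As a sanity check on the MTU recommendation above, the arithmetic below shows why a 9000-byte endpoint MTU fits comfortably within a 9216-byte switch MTU. This is a sketch using standard Ethernet overhead figures, not values taken from this guide; exactly which headers a given switch counts toward its configured MTU varies by vendor.

```python
# Worked example: endpoint MTU 9000 vs. switch MTU 9216.
# The endpoint MTU counts only the IP packet; the switch must also pass
# the Layer-2 framing around it. Overhead values are standard Ethernet.

ENDPOINT_MTU = 9000     # recommended on SFSS, vSphere, PowerStore, and Linux
ETHERNET_HEADER = 14    # destination MAC + source MAC + EtherType
VLAN_TAG = 4            # 802.1Q tag (NVMe/TCP VLANs are tagged)
FCS = 4                 # frame check sequence

frame_size = ENDPOINT_MTU + ETHERNET_HEADER + VLAN_TAG + FCS
print(frame_size)       # 9022, under the 9216 configured on switches
```

Configuring switches above the endpoint MTU leaves headroom for this Layer-2 overhead, so a full-size jumbo frame is never silently dropped or fragmented in the fabric.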
The following are MLAG best practices for NVMe/TCP endpoints.
| Port | Recommendation |
|---|---|
| Switch ports that connect to ESXi host ports used for NVMe/TCP traffic (I/O) | The NVMe/TCP vmhba cannot fail over, so teaming is not used on these ports. Do not configure LAG or MLAG on the switch ports. |
| Switch ports that connect to Linux host ports used for NVMe/TCP traffic (I/O) | While these ports can be bonded, the best practice is to leave them unbonded. Leverage native multipathing or another multipathing solution instead. |
| Switch ports that connect to PowerStore ports used for NVMe/TCP traffic (I/O) | Depending on which ports are used for NVMe/TCP, MLAG may or may not be used. Follow the guidelines in the Networking Guide for PowerStore models on the PowerStore: Info Hub - Product Documentation and Videos page. |
| Switch ports that connect to PowerMax ports used for NVMe/TCP traffic (I/O) | Do not configure LAG/MLAG on the switch ports. Cross-connects between switches and nodes are not recommended. |
| Switch ports that connect to ESXi or Linux host ports that host the SFSS VM and will not be used to transport NVMe/TCP traffic | These ports can be Active/Active or Active/Standby, with MLAG optional. |
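For the Linux recommendation above, native NVMe multipathing can be confirmed through sysfs before relying on it instead of bonding. The sketch below assumes the standard `nvme_core` kernel module parameter path; the helper name is illustrative and not part of SFSS or the guide.

```python
from pathlib import Path

# Standard sysfs location for the Linux native NVMe multipath (ANA) toggle.
DEFAULT_PARAM = Path("/sys/module/nvme_core/parameters/multipath")

def native_multipath_enabled(param: Path = DEFAULT_PARAM) -> bool:
    """Return True if the kernel reports native NVMe multipath as enabled.

    The parameter file contains 'Y' when enabled and 'N' when disabled.
    Returns False if the file is missing (e.g., nvme_core not loaded).
    """
    try:
        return param.read_text().strip() == "Y"
    except OSError:
        return False
```

If this returns False on a host that will carry NVMe/TCP I/O, enable native multipathing (or deploy another multipathing solution) rather than bonding the interfaces.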
For ports connected to NVMe/TCP endpoints, priority flow control (IEEE 802.1Qbb) should be off for receive and transmit. The following are VLAN tagging recommendations for NVMe/TCP endpoints.
| Port | Recommendation |
|---|---|
| Switch ports that connect to NVMe/TCP host ports used for accessing NVMe storage on a subsystem | NVMe/TCP VLANs are tagged. |
| Switch ports that connect to PowerStore ports used for NVMe/TCP traffic | NVMe/TCP VLANs are tagged. Follow the guidelines in the Networking Guide for PowerStore models on the PowerStore: Info Hub - Product Documentation and Videos page. |
| Switch ports that connect to PowerMax ports used for NVMe/TCP traffic | NVMe/TCP VLANs are tagged. |
| Switch ports that connect to NVMe/TCP host ports that will be used to access SFSS and will not be used to transport NVMe/TCP traffic | NVMe/TCP VLANs are tagged. |
Avoid congestion by leveraging the following best practices:
Storage traffic and SFSS control traffic must not be firewalled. Administrators may want to firewall administrative traffic to the SFSS management interface.
| Port | Protocol | Purpose | Description | Source | Destination |
|---|---|---|---|---|---|
| 22 | SSH | Console access through SSH | Console access through SSH | Admin | SFSS Management Interface |
| 49 | TCP | TACACS+ | Remote user authentication | SFSS Management | TACACS+ Server |
| 443 | HTTPS | HTTPS access to SFSS UI, REST API | User access to SFSS through web UI or REST API | Admin | SFSS Management Interface |
| 1812 | UDP | RADIUS | Remote user authentication | SFSS Management | RADIUS Server |
| 4420 | TCP | NVMe/TCP I/O Controller | Data traffic between the host and the subsystem | Host NVMe/TCP Interface | I/O controller on storage subsystems |
| 5353 | UDP | Automated Discovery using mDNS | Endpoints and CDC send mDNS queries to discover each other in a VLAN | CDC, Host and Storage Endpoints | Multicast addresses: IPv4 - 224.0.0.251, IPv6 - ff02::fb |
| 8009 | TCP | NVMe/TCP Discovery | Host registration | Host NVMe/TCP Interface | SFSS CDC Interface |
| 8009 | TCP | NVMe/TCP Discovery | Subsystem registration | SFSS CDC Interface | Direct Discovery Controller on storage subsystems |
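When building firewall exceptions, the flow table above can be collapsed into a per-protocol allow-list. The sketch below mirrors the table's port assignments; the data structure and function names are illustrative and not part of SFSS.

```python
# Flow entries transcribed from the SFSS port table; "proto" here is the
# transport protocol carrying each service (SSH and HTTPS run over TCP).
SFSS_FLOWS = [
    {"port": 22,   "proto": "tcp", "purpose": "Console access through SSH"},
    {"port": 49,   "proto": "tcp", "purpose": "TACACS+ authentication"},
    {"port": 443,  "proto": "tcp", "purpose": "HTTPS UI / REST API"},
    {"port": 1812, "proto": "udp", "purpose": "RADIUS authentication"},
    {"port": 4420, "proto": "tcp", "purpose": "NVMe/TCP I/O"},
    {"port": 5353, "proto": "udp", "purpose": "mDNS automated discovery"},
    {"port": 8009, "proto": "tcp", "purpose": "NVMe/TCP discovery (CDC/DDC)"},
]

def ports(proto: str) -> list[int]:
    """Return the sorted, de-duplicated port list for one transport protocol."""
    return sorted({flow["port"] for flow in SFSS_FLOWS if flow["proto"] == proto})

print(ports("tcp"))  # [22, 49, 443, 4420, 8009]
print(ports("udp"))  # [1812, 5353]
```

Note that 8009 appears once in the allow-list even though the table lists it twice (host registration and subsystem registration flow in opposite directions over the same port), and that 5353 is multicast, so it must be permitted to the mDNS group addresses rather than to a unicast destination.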