Dell Integrated System for Microsoft Azure Stack HCI requires a reliable, high-bandwidth, low-latency network connection between each clustered node for storage network traffic. Dell Technologies recommends switchless storage networking to avoid the extra expense of high-speed network switches in locations such as remote office/branch office (ROBO) sites.
Take cluster size into careful consideration before deploying a full mesh switchless Storage Spaces Direct cluster. Although expansion is possible, adding a node to a switchless cluster may require installing additional network adapter cards in the existing nodes. Creating the additional storage networks within Windows Server 2019 while the cluster remains running is not advised, and the PowerShell scripts in this guide are not designed for expansion purposes. Dell Technologies has not validated expanding switchless clusters. To expand a switchless cluster, Dell Technologies recommends redeploying the cluster from scratch so that cabling procedures are followed correctly and operating system misconfiguration is avoided.
Microsoft announced support for two-node switchless clusters for Windows Server Azure Stack HCI and Failover Clustering. Most recently, Microsoft extended this support to three- and four-node clusters using a full mesh storage network topology. A full mesh interconnect requires direct network cable connections between every node in the cluster. These direct connections can be configured as either single-link or dual-link full mesh topologies. For redundancy and performance, dual-link direct network connections between every node of the cluster are recommended.
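The cabling and port counts behind these recommendations follow directly from the full mesh topology: every node pair needs its own direct connection, so port requirements grow with each node added. The sketch below illustrates that arithmetic; the function name and structure are illustrative, not part of the deployment scripts referenced in this guide.

```python
# Sketch: cable and per-node port counts for a full mesh switchless
# storage topology. Counts dedicated storage ports only; management,
# VM, and other external traffic still needs a network switch.

def full_mesh_requirements(nodes: int, links_per_pair: int = 2) -> dict:
    """Cables and per-node storage ports for an n-node full mesh.

    links_per_pair=1 models a single-link mesh; the default of 2
    models the recommended dual-link mesh.
    """
    node_pairs = nodes * (nodes - 1) // 2  # every pair is directly cabled
    return {
        "cables": node_pairs * links_per_pair,
        "ports_per_node": (nodes - 1) * links_per_pair,
    }

# A four-node dual-link mesh needs 12 cables and 6 storage ports per
# node (for example, three dual-port adapters) - which is why adding a
# node to an existing mesh can require extra network adapter cards.
print(full_mesh_requirements(4))  # {'cables': 12, 'ports_per_node': 6}
print(full_mesh_requirements(2))  # {'cables': 2, 'ports_per_node': 2}
```

Because `ports_per_node` scales linearly with cluster size, each additional node raises the adapter requirement on every existing node, consistent with the guidance above to redeploy rather than expand in place.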
A full mesh switchless Storage Spaces Direct cluster provides these advantages:
- Data center environments with an existing 1 GbE network switch infrastructure avoid the cost of upgrading to 10 GbE (or faster) switches, because Storage Spaces Direct requires a minimum of 10 GbE for the storage network.
- Data center environments with an existing 10 GbE network switch infrastructure gain increased throughput for storage traffic, because the clustered nodes are directly connected with 25 GbE network adapters. Note: Management, VM, and any other external network traffic still requires connectivity to a network switch.