Basic configuration
This section describes the host network configuration and network cards that are required to configure a basic stretched cluster. The purpose of this topology is to keep the host and inter-site configuration simple with little or no change to a standard cluster networking architecture.
Here we use two 25 GbE NICs for each host on both sites. One NIC is dedicated to intra-site storage traffic, similar to a stand-alone Storage Spaces Direct environment. The second NIC is used for management, compute, and Storage Replica traffic. To ensure that management traffic is not bottlenecked by heavy traffic on the Replica network, ask the customer's network team to throttle traffic between the two sites using firewall or router QoS rules. We recommend throttling this traffic to 50 percent of the aggregate bandwidth of the network adapters in the management NIC team.
The management network is the only interface between the two sites. Because only one network path is available between the hosts in Site A and Site B, the following warning appears in the cluster validation report. This is expected behavior.
Node SiteANode1.Test.lab is reachable from Node SiteBNode1.Test.lab by only one pair of network interfaces. It is possible that this network path is a single point of failure for communication within the cluster. Please verify that this single path is highly available, or consider adding additional networks to the cluster.
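If you want to reproduce this warning, rerun the network portion of cluster validation against the stretched cluster nodes. A minimal sketch, assuming the node names shown in the warning above:

```powershell
# Run only the network validation tests and generate the standard validation report,
# where the single-path warning appears. Substitute your own node names.
Test-Cluster -Node "SiteANode1.Test.lab","SiteBNode1.Test.lab" -Include "Network"
```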
The following table shows the IP address schema:

| Network | Site A | Site B | Type of traffic |
|---------|--------|--------|-----------------|
| Management/Replica traffic | 192.168.100.0/24 | 192.168.200.0/24 | L2/L3 |
| Intra-site Storage (RDMA) - 1 | 192.168.101.0/24 | 192.168.201.0/24 | L2 |
| Intra-site Storage (RDMA) - 2 | 192.168.102.0/24 | 192.168.202.0/24 | L2 |
| VM Network/Compute Network | As per customer environment | As per customer environment | L2/L3 |
The following figure shows the network topology of a basic stretched cluster:

High throughput configuration
In this topology, we use two 25 GbE NICs and two additional 1/10/25 GbE ports from each host to configure a high throughput stretched cluster. One NIC is dedicated to intra-site RDMA traffic, similar to a stand-alone Storage Spaces Direct environment. The second NIC is used for replica traffic. SMB Multichannel distributes traffic evenly across both replica adapters, which increases network performance and availability. SMB Multichannel enables the simultaneous use of multiple network connections, providing bandwidth aggregation and network fault tolerance when multiple paths are available. For more information, see Manage SMB Multichannel.
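To confirm that SMB Multichannel is spreading traffic across both replica adapters, you can inspect the SMB client interfaces and active connections on a host. A minimal sketch using the standard SMB cmdlets (no environment-specific names are assumed):

```powershell
# List the network interfaces SMB considers usable, including RSS/RDMA capability and link speed
Get-SmbClientNetworkInterface

# Show active multichannel connections; with two replica adapters you should see
# multiple connections to the remote node, one set per interface pair
Get-SmbMultichannelConnection
```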
The Set-SRNetworkConstraint cmdlet is used to ensure replica traffic flows only through the dedicated interfaces and not through the management interface. Run this cmdlet once for each volume.
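For example, a minimal sketch with hypothetical computer names, replication group names, and interface indexes (look up the indexes with Get-NetIPConfiguration on each node, and repeat the call for each volume's replication group):

```powershell
# Hypothetical values: nodes SiteANode1/SiteBNode1, replication groups RG01/RG02,
# and interface indexes 5 and 6 for the two dedicated replica adapters on each side.
Set-SRNetworkConstraint `
    -SourceComputerName "SiteANode1" `
    -SourceRGName "RG01" `
    -SourceNWInterface 5,6 `
    -DestinationComputerName "SiteBNode1" `
    -DestinationRGName "RG02" `
    -DestinationNWInterface 5,6

# Verify the constraint is in place
Get-SRNetworkConstraint
```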
IP address schema
The following table shows the IP address schema:
| Network | Site A | Site B | Type of traffic |
|---------|--------|--------|-----------------|
| Management | 192.168.100.0/24 | 192.168.200.0/24 | L2/L3 |
| Intra-site Storage (RDMA) - 1 | 192.168.101.0/24 | 192.168.201.0/24 | L2 |
| Intra-site Storage (RDMA) - 2 | 192.168.102.0/24 | 192.168.202.0/24 | L2 |
| Replica - 1* | 192.168.111.0/24 | 192.168.211.0/24 | L2/L3 |
| Replica - 2* | 192.168.112.0/24 | 192.168.212.0/24 | L2/L3 |
| VM Network | As per customer environment | As per customer environment | L2/L3 |
| Cluster IP | 192.168.100.100 | 192.168.200.100 | L2 |
*Static routes are required on all hosts in both sites so that the 192.168.111.0/24 network can reach 192.168.211.0/24 and the 192.168.112.0/24 network can reach 192.168.212.0/24. Static routes are needed in this topology because there are three network paths between Site A and Site B: management traffic uses the default gateway to traverse the network, while the Replica networks use static routes on the hosts to reach the secondary site. If your ToR switches do not have BGP configured, they also need static routes.
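As an illustration, the following sketch adds the host-side routes on a Site A node, assuming hypothetical interface aliases ("Replica1", "Replica2") and next-hop gateway addresses; repeat with the mirrored subnets and next hops on every host in both sites:

```powershell
# Route the Site B replica subnets through the local replica interfaces.
# Interface aliases and next-hop addresses are examples; use your own values.
New-NetRoute -DestinationPrefix "192.168.211.0/24" -InterfaceAlias "Replica1" -NextHop "192.168.111.1"
New-NetRoute -DestinationPrefix "192.168.212.0/24" -InterfaceAlias "Replica2" -NextHop "192.168.112.1"

# Confirm the routes are present
Get-NetRoute -DestinationPrefix "192.168.211.0/24","192.168.212.0/24"
```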
The following figure shows the network topology of a high throughput stretched cluster:

2+2 nodes back-to-back switchless storage stretched topology
Here we use a 2+2 node configuration in which the storage NICs are cabled to create a dual-link direct interconnect between the two nodes at each site (also known as back-to-back). The two-node, back-to-back storage topology supports all RDMA-capable network interface controllers that are supported in the solution.
The purpose of this topology is to keep the host and inter-site configuration simple with switchless storage networking. This is a cost-effective solution that provides fault tolerance at the cluster level, but it is subject to northbound connectivity interruptions if the single physical switch fails or requires maintenance.
In this scenario, the network reference pattern requires only a single ToR switch to deploy the Azure Stack HCI solution. Routing services such as BGP must be configured on the firewall device on top of the ToR switch if the switch does not support L3 services. Network security features such as micro-segmentation and QoS do not require extra configuration on the firewall device because they are implemented on the virtual switch.
The numbered blue lines indicate the recommended cabling order, beginning with the network interface controller in the lowest slot number (port 1). Perform the same cabling steps for the other two nodes in the cluster. After cabling storage, connect ports 1 and 2 of the LOM/rNDC/OCP adapters on all nodes to the management/VM network. Connect the management NICs to the ToR switch and team these NICs by configuring the management/stretch Network ATC intent. Similarly, configure IP addresses on the RDMA NICs and create the storage intent. Finally, configure cluster fault domains so that the cluster operates as a stretched environment.
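The following sketch outlines those last steps with hypothetical adapter, site, and node names; the actual adapter names, site names, and node names depend on your hardware and environment:

```powershell
# Management/compute intent on the teamed LOM/rNDC/OCP ports (example adapter names)
Add-NetIntent -Name "ManagementCompute" -Management -Compute -AdapterName "NIC1","NIC2"

# Storage intent on the back-to-back RDMA NICs (example adapter names)
Add-NetIntent -Name "Storage" -Storage -AdapterName "RDMA1","RDMA2"

# Define site fault domains and assign the nodes to them (example site and node names)
New-ClusterFaultDomain -Name "SiteA" -FaultDomainType Site
New-ClusterFaultDomain -Name "SiteB" -FaultDomainType Site
Set-ClusterFaultDomain -Name "SiteANode1","SiteANode2" -Parent "SiteA"
Set-ClusterFaultDomain -Name "SiteBNode1","SiteBNode2" -Parent "SiteB"
```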