Dell Solutions for Microsoft Azure Stack HCI encompass various configurations across our portfolio of servers. Each platform offers various chassis riser configurations, which determine the total number of network adapters allowed. The BOSS or HBA adapters often consume up to two of the available PCIe slots. Additionally, some models cannot physically accommodate the additional network adapters that a switchless network topology requires. For example, the AX-640 with a 12 x 2.5" hard drive chassis configuration has limited PCIe slots; on that platform, a dual-port rNDC is required for RDMA traffic instead.
The following RDMA network adapters are validated with three- or four-node switchless mesh topologies:
- Mellanox ConnectX-4 Lx Dual Port 25 GbE
- Mellanox ConnectX-5 Lx Dual Port 25 GbE
- Mellanox ConnectX-6 Lx Dual Port 25 GbE
- QLogic FastLinQ 41262 Dual Port 25 GbE
- Intel E810-XXV Dual Port 10/25 GbE SFP28 Adapter
The following table lists the maximum number of RDMA network adapters that can be physically installed into each platform.
| Available PCIe slots for NICs | 0-2 | 1-3 | 1 | 2-4 |
The following table lists the number of PCIe network adapters that are required per clustered node to create a full mesh switchless interconnect.
| Description | 3-node cluster | 4-node cluster |
|---|---|---|
| Single-link full mesh* | 1 RDMA network adapter | 2 RDMA network adapters |
| Dual-link full mesh | 2 RDMA network adapters | 3 RDMA network adapters |
*For single-link full mesh interconnect topologies, 10 GbE (or faster) external management network switch connectivity is required. This network provides redundancy for storage network traffic if a storage network link (cable, port, or card) fails.
*Single-link full mesh is not recommended for high-performance configurations in which VM or storage workloads have high throughput requirements.
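The adapter counts in the table above follow from simple port arithmetic: in a full mesh, each node needs one port per peer per link (so N-1 ports for single-link, 2 x (N-1) for dual-link), and each validated adapter provides two ports. A minimal sketch of that calculation (the function name and structure are illustrative, not part of any Dell tooling):

```python
import math

def adapters_per_node(nodes: int, links_per_peer: int = 1,
                      ports_per_adapter: int = 2) -> int:
    """Dual-port RDMA adapters each node needs for a full mesh
    switchless interconnect: every node links directly to every
    other node, using links_per_peer ports per peer."""
    ports_needed = links_per_peer * (nodes - 1)
    return math.ceil(ports_needed / ports_per_adapter)

# Reproduces the table:
# 3-node single-link -> 1 adapter,  4-node single-link -> 2 adapters
# 3-node dual-link   -> 2 adapters, 4-node dual-link   -> 3 adapters
```

The 4-node single-link case rounds up (three ports needed, two per adapter), which is why it requires two adapters even though only three ports are used.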
If a storage network cable or port fails in a single-link full mesh topology, storage traffic between the affected nodes is automatically rerouted over the management network. Because the management network does not support the RDMA protocol, that storage traffic is carried over TCP instead.
Dell Technologies recommends Direct Attach Copper (DAC) cabling instead of Active Optical Cabling (AOC) and optical transceivers, especially for direct server-to-server storage network connectivity. DAC cabling offers a cost-effective, highly reliable solution for short-distance data transmission. AOCs and optical transceivers generally draw more power from each server. Optical transceivers may also require the correct polarity of fiber optic cabling to allow server-to-server network communication. When optical cables are required, ensure that only Dell-validated, high-temperature optical transceivers are used with the Intel, QLogic, and Mellanox 25 GbE network adapters.