Each of the variants in Microsoft HCI Solutions from Dell Technologies supports a specific type of network connectivity. The type of network connectivity determines the solution integration requirements.
- For information about all supported topologies for fully converged and nonconverged solution integration, including switchless storage networking and host operating system network configuration, see Network Integration and Host Network Configuration Options.
- For switchless storage networking, cable the servers according to the instructions in Cabling Instructions.
- For sample switch configurations for these network connectivity options, see Sample Network Switch Configuration Files.
Fully converged network connectivity
In the fully converged network configuration, both storage and management/VM traffic use the same set of network adapters. These adapters are configured with Switch-Embedded Teaming (SET). When using RoCE in a fully converged network configuration, you must configure Data Center Bridging (DCB).
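As an illustration of this configuration, a SET team that carries storage, management, and VM traffic on the same pair of adapters might be created as follows. The switch name, adapter names, and virtual NIC names are placeholders, not values prescribed by this guide:

```powershell
# Create a virtual switch with Switch-Embedded Teaming (SET) across both
# physical adapters; in a fully converged topology this single team carries
# storage, management, and VM traffic.
# "S2DSwitch", "NIC1", and "NIC2" are placeholder names.
New-VMSwitch -Name "S2DSwitch" `
    -NetAdapterName "NIC1", "NIC2" `
    -EnableEmbeddedTeaming $true `
    -AllowManagementOS $false

# Add host virtual NICs for the management and storage traffic classes
# on the same SET team.
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "S2DSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Storage1" -SwitchName "S2DSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Storage2" -SwitchName "S2DSwitch"
```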
The following table describes when to configure DCB based on the chosen network card and switch topology:
|Network card on node|Fully converged switch topology|Nonconverged switch topology|Switchless topology|
|---|---|---|---|
|Mellanox (RoCE)|DCB (required)|DCB (required) for storage adapters only|No DCB/QoS required|
|QLogic (iWARP)|DCB (required for All-NVMe configurations only)|No DCB|No DCB/QoS required|
For topologies where DCB/QoS is not required, you can disable QoS on a network adapter by running the Disable-NetAdapterQos <nicName> command.
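For reference, a typical DCB configuration for RoCE storage adapters is sketched below. The policy name, 802.1p priority value, bandwidth percentage, and adapter name are illustrative assumptions, not values prescribed by this guide:

```powershell
# Tag SMB Direct (storage) traffic with 802.1p priority 3; the policy
# name and priority value are example choices.
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control only for the storage priority and disable
# it for all others.
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve bandwidth for storage traffic using ETS (50% is an example value).
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply the QoS/DCB settings to the storage adapter ("NIC1" is a placeholder).
Enable-NetAdapterQos -Name "NIC1"
```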
Nonconverged network connectivity
In the nonconverged network configuration, storage traffic uses a dedicated set of network adapters either in a SET configuration or as physical adapters. A separate set of network adapters is used for management, VM, and other traffic classes. In this connectivity method, DCB configuration is optional for QLogic (iWARP), but mandatory for Mellanox (RoCE) adapters.
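As a sketch of this layout, the management/VM team and the dedicated storage adapters might be configured as shown below. The switch name, adapter aliases, and IP addresses are placeholder values:

```powershell
# SET team for management and VM traffic only.
New-VMSwitch -Name "MgmtSwitch" `
    -NetAdapterName "NIC1", "NIC2" `
    -EnableEmbeddedTeaming $true `
    -AllowManagementOS $true

# Dedicated physical adapters for storage traffic, each on its own
# storage subnet (addresses are example values).
New-NetIPAddress -InterfaceAlias "Storage-A" -IPAddress 172.16.1.10 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Storage-B" -IPAddress 172.16.2.10 -PrefixLength 24
```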
The switchless storage networking deployment model also implements nonconverged network connectivity without the need for network switches for storage traffic.
Network connectivity for single node clusters
A single node cluster has adapters configured for only management and VM traffic. However, Azure Stack HCI Engineering still recommends configuring a virtual network interface for Live Migration in case you intend to use Shared Nothing Live Migration to move workloads to other clusters at a later time. You can also use the adapter to configure application or VM replication. For guidance on network configurations/topologies, see Network Integration and Host Network Configuration Options.
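A minimal sketch of adding a host virtual NIC for Live Migration on a single node cluster follows. The virtual NIC name, switch name, and subnet are assumed placeholders:

```powershell
# Add a host virtual adapter dedicated to Live Migration traffic.
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "MgmtSwitch"

# Restrict Live Migration to the network reserved for it
# (the subnet shown is an example value) and enable migration.
Set-VMMigrationNetwork 172.16.3.0
Enable-VMMigration
```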