Dell Engineering created the following Network ATC intents on each Azure Stack HCI cluster node, as shown in the following table:
| Intent name | Intent type | Network adapter | Adapter label |
|---|---|---|---|
| Management_Compute | Management and compute | NVIDIA ConnectX-6 Lx Dual-Port 10/25GbE SFP28 OCP NIC 3.0 | NIC-1 |
| S2D | Storage | NVIDIA ConnectX-6 Lx Dual Port 10/25GbE SFP28 Adapter, PCIe Low Profile Add-In | NIC-3 |
| PowerFlex_SDS | Compute | NVIDIA ConnectX-6 Lx Dual Port 10/25GbE SFP28 Adapter, PCIe Low Profile Add-In | NIC-6 |
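As an illustration, intents of this shape could be created with the Network ATC PowerShell cmdlets along the lines of the following sketch. The adapter names are placeholders and would need to match the names reported by Get-NetAdapter on the nodes; the PowerFlex_SDS intent is shown with its overrides after the list of details below.

```powershell
# Sketch only: adapter names are placeholders; use the names reported by Get-NetAdapter.
# Management and compute traffic on the two ports of the OCP NIC (NIC-1).
Add-NetIntent -Name Management_Compute -Management -Compute -AdapterName "NIC-1-Port1", "NIC-1-Port2"

# S2D storage traffic on the two ports of the first PCIe add-in NIC (NIC-3).
Add-NetIntent -Name S2D -Storage -AdapterName "NIC-3-Port1", "NIC-3-Port2"
```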
The network topology was a non-converged configuration in which management and compute traffic was separated from S2D and PowerFlex storage traffic. The PowerFlex_SDS intent was configured with the compute intent type. Other pertinent details about the PowerFlex_SDS intent include:
- The compute intent type was chosen because the Azure Stack HCI operating system only allows one intent to be configured with the storage intent type. The storage intent type is reserved for use by the S2D intent.
- By default, all intents with the compute intent type had Single Root I/O Virtualization (SR-IOV) enabled. However, this feature was not used for the integration.
- Quality of Service (QoS) policies were not configured on the PowerFlex_SDS intent because there was no potential for contention with other traffic types.
- RDMA was enabled by default but only used on the S2D intent. It was not needed on the PowerFlex_SDS intent.
- Jumbo Frames were enabled, with the MTU set to 9014 (see the override sketch after this list).
- The network adapter port connectivity speed on all physical ports was set to 25 GbE.
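Because Jumbo Frames are not enabled by default, the MTU setting described above could be supplied as an adapter property override when the PowerFlex_SDS intent is created. The following is a minimal sketch, assuming placeholder adapter names and that the override object returned by New-NetIntentAdapterPropertyOverrides exposes a JumboPacket property:

```powershell
# Sketch only: adapter names are placeholders for the two NIC-6 ports.
$AdapterOverride = New-NetIntentAdapterPropertyOverrides
$AdapterOverride.JumboPacket = 9014   # enable Jumbo Frames (MTU 9014) on the member adapters

Add-NetIntent -Name PowerFlex_SDS -Compute -AdapterName "NIC-6-Port1", "NIC-6-Port2" -AdapterPropertyOverrides $AdapterOverride

# Confirm that the intent was applied successfully on each node.
Get-NetIntentStatus -Name PowerFlex_SDS
```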
A summary of this information can be found in the table below:
| Feature | Default state | Configured state |
|---|---|---|
| RDMA | Enabled but not used | Enabled but not used |
| SR-IOV | Enabled but not used | Enabled but not used |
| QoS | Default (no configuration) | Default (no configuration) |
| Jumbo Frames | Disabled | Enabled (MTU: 9014) |
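These states could be checked from the operating system with the standard NetAdapter cmdlets, for example as in the following sketch (adapter names are again placeholders for the NIC-6 ports):

```powershell
# Sketch only: adapter names are placeholders.
# RDMA state on the PowerFlex_SDS physical ports (enabled, but not used by this intent).
Get-NetAdapterRdma -Name "NIC-6-Port1", "NIC-6-Port2"

# SR-IOV state (enabled by default for compute intents, not used for this integration).
Get-NetAdapterSriov -Name "NIC-6-Port1", "NIC-6-Port2"

# Jumbo Frame setting applied through the intent override.
Get-NetAdapterAdvancedProperty -Name "NIC-6-Port1", "NIC-6-Port2" -RegistryKeyword "*JumboPacket"

# Link speed of the physical ports (expected to be 25 Gbps).
Get-NetAdapter -Name "NIC-6-Port1", "NIC-6-Port2" | Select-Object Name, LinkSpeed
```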
As shown in the following diagram, the PowerFlex_SDS intent was mapped to the two ports of physical NIC-6 on each MC node. Each port was configured on a separate, dedicated, non-routable VLAN. These VLANs were configured in the network infrastructure to facilitate communication between the MC nodes and the PowerFlex nodes.
The following table shows how the vNICs were configured with affinity to specific ports on NIC-6:
| Intent | vNIC | Physical NIC | Port | VLAN |
|---|---|---|---|---|
| PowerFlex_SDS | SDS-1 | NIC-6 | 1 | 1001 |
| PowerFlex_SDS | SDS-2 | NIC-6 | 2 | 1002 |
Should a port failure occur, the virtual NIC bound to that port could fail over and use the other physical port. These vNICs were labeled SDS-1 and SDS-2 because the Storage Data Client installed on the Azure Stack HCI operating system communicates with the PowerFlex cluster over them. On each node, SDS-1 was configured with the first SDS VLAN ID and SDS-2 with the second SDS VLAN ID. Both had Jumbo Frames enabled, with the MTU set to 9014.
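A rough sketch of how this vNIC layout could be expressed with the Hyper-V and NetAdapter cmdlets is shown below. The virtual switch name, the physical adapter names, and the "vEthernet (SDS-x)" host vNIC names are assumptions, and Network ATC may create some of these objects itself in this environment; the commands only illustrate the mapping described above.

```powershell
# Sketch only: the switch name and adapter names are placeholders for the SET switch
# created for the PowerFlex_SDS intent and the two NIC-6 ports.
# Create the two host vNICs used by the PowerFlex Storage Data Client.
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch(PowerFlex_SDS)" -Name "SDS-1"
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch(PowerFlex_SDS)" -Name "SDS-2"

# Place each vNIC on its dedicated, non-routable VLAN.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "SDS-1" -Access -VlanId 1001
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "SDS-2" -Access -VlanId 1002

# Affinitize each vNIC to one physical port on NIC-6; SET can still fail traffic over
# to the surviving port if the mapped port goes down.
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SDS-1" -PhysicalNetAdapterName "NIC-6-Port1"
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SDS-2" -PhysicalNetAdapterName "NIC-6-Port2"

# Enable Jumbo Frames (MTU 9014) on the host vNICs.
Set-NetAdapterAdvancedProperty -Name "vEthernet (SDS-1)" -RegistryKeyword "*JumboPacket" -RegistryValue 9014
Set-NetAdapterAdvancedProperty -Name "vEthernet (SDS-2)" -RegistryKeyword "*JumboPacket" -RegistryValue 9014
```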