Figure 4 depicts the cluster’s physical networking as built in the lab. We set up a scalable, fully converged network topology in which management, VM, and RDMA storage traffic all traverse two 100 GbE Mellanox ConnectX-6 adapter ports. This converged design required Data Center Bridging (DCB) configuration on the S5232F-ON ToR switches and Quality of Service (QoS) configuration in the host operating system. Figure 5 illustrates the virtual network configuration on the host operating system using Switch Embedded Teaming (SET).
Ancillary services required for cluster operations, such as Active Directory, DNS, and a file share or cloud-based witness, are outside the scope of this white paper. For information on all the steps required to deploy this cluster, refer to our deployment guide.
Note: Out-of-band (OOB) connections from the iDRACs to a management switch are not included in the diagram.
Tables 1 and 2 provide additional detail about the cluster configuration and the AX-7525 specifications as built in the lab. We chose the three-way mirror resiliency type for the volumes we created with VMFleet because it delivers better performance than the erasure coding (parity) options in Storage Spaces Direct. The cluster’s raw storage capacity was a little over 150 TB, which yielded just under 50 TB of usable capacity because three-way mirroring has a capacity efficiency of 33 percent. Microsoft and Dell Technologies also recommend reserving capacity equivalent to one drive per server so that Storage Spaces Direct can repair volumes immediately after a drive failure.
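As a quick sanity check on those figures, the following sketch shows the capacity arithmetic. It uses only the approximate raw-capacity value quoted above, not an exact inventory of the drives:

```python
# Back-of-the-envelope capacity math for the lab cluster described above.
# The 150 TB raw figure is the approximate value quoted in the text, not an
# exact sum of individual drive capacities.
raw_tb = 150.0

# A three-way mirror keeps three copies of every block, so usable capacity
# is roughly one third of raw capacity (~33 percent efficiency).
usable_tb = raw_tb / 3

print(f"Raw: {raw_tb:.0f} TB -> usable with three-way mirror: ~{usable_tb:.0f} TB")
# Raw: 150 TB -> usable with three-way mirror: ~50 TB
```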
Resources per cluster node:
Processors: Dual-socket AMD EPYC 7742 64-Core Processor
Memory: 1 TB 3200 MHz DDR4 RAM
Storage controller for operating system: BOSS-S1 adapter card
Physical drives for operating system: 2 x M.2 480 GB SATA drives configured as RAID 1
Physical drives for Storage Spaces Direct: 24 x NVMe drives (PCIe Gen 4)
Network adapter for management, VM, and storage traffic: 1 x Mellanox ConnectX-6 DX Dual Port 40/100 GbE QSFP56 Adapter
Operating system: Microsoft Azure Stack HCI, version 20H2