This section provides information about the compute and storage configuration of the ax740sc101 cluster.
The following table details the hardware inventory of each AX-740xd node. As the stretch cluster solution requires, we configured the AX nodes in both sites with identical hardware. Only all-flash and all-NVMe storage configurations are currently validated and supported for stretch clustering on Dell Integrated System for Microsoft Azure Stack HCI.
Table 2. AX-740xd node configuration
| Resources per AX-740xd | Description |
|---|---|
| CPU | 2 x Intel Xeon Gold 6230R CPU @ 2.10 GHz, 26 cores |
| Memory | 384 GB |
| Storage controller for operating system | BOSS-S1 adapter card |
| Physical drives for operating system | 2 x Intel 240 GB M.2 SATA drives configured as RAID 1 |
| Storage controller for Storage Spaces Direct (S2D) | HBA330 Mini |
| Physical drives for S2D | 8 x 960 GB mixed-use Samsung SATA SSDs |
| Network adapters | |
| Operating system | Microsoft Azure Stack HCI, version 20H2 |
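Because the stretch cluster requires identical hardware in both sites, it can be useful to confirm the inventory in Table 2 on every node before deployment. The following Python sketch is a minimal illustration only: it shells out to PowerShell from a Windows management host, and the node names are placeholders rather than the actual ax740sc101 member servers.

```python
import subprocess

# Placeholder node names; substitute the actual ax740sc101 member servers.
NODES = ["ax740s1n1", "ax740s1n2", "ax740s2n1", "ax740s2n2"]

def run_ps(command: str) -> str:
    """Run a PowerShell command from a Windows management host and return its output."""
    completed = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return completed.stdout.strip()

for node in NODES:
    # Count the SSDs the node reports (to compare with the S2D drives in Table 2)
    # and report the approximate installed memory.
    ssd_count = run_ps(
        f"Invoke-Command -ComputerName {node} "
        "{ (Get-PhysicalDisk | Where-Object MediaType -eq 'SSD').Count }"
    )
    memory_gb = run_ps(
        f"Invoke-Command -ComputerName {node} "
        "{ [math]::Round((Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1GB) }"
    )
    print(f"{node}: {ssd_count} SSDs, ~{memory_gb} GB RAM")
```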
The following figure shows the nodes in the ax740sc101 cluster as they appear in Windows Admin Center:
Figure 5. Servers view in Windows Admin Center
We configured each site with a single storage pool. The following figures show how we created the volumes in Windows Admin Center:
Figure 6. Creating a volume in Windows Admin Center
Figure 7. Entering volume details in Windows Admin Center
For each data volume, we selected Replicate across two sites and set the replication mode to Synchronous. Because this cluster was set up as active/active, the replication direction could be either From Bangalore to Chennai or From Chennai to Bangalore, depending on the node on which the active VMs would be created. We created all volumes with two-way mirroring for the best mix of performance and resiliency.
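We created the volumes and replication partnerships in Windows Admin Center. Purely as a hedged illustration of the equivalent cmdlets, the following Python sketch shells out to PowerShell to create one two-way mirrored data volume and pair it with its replica in synchronous mode. The storage pool name, the replication group names, and the volume paths are assumptions for illustration only; the real volume names appear in Tables 3 and 4.

```python
import subprocess

def run_ps(command: str) -> None:
    """Invoke a PowerShell command on a cluster node and stop on errors."""
    subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        check=True,
    )

# Assumed pool name; in our setup the volume was created in Windows Admin Center.
# Two-way mirroring corresponds to -PhysicalDiskRedundancy 1.
run_ps(
    "New-Volume -StoragePoolFriendlyName 'Pool for Site 1' "
    "-FriendlyName ax740xds1N1 -FileSystem CSVFS_ReFS "
    "-ResiliencySettingName Mirror -PhysicalDiskRedundancy 1 -Size 800GB"
)

# Illustrative names and paths for one volume pair from Table 3; the replication
# group names (rg-*) and CSV paths are assumptions, not values from the lab setup.
src_node, dst_node = "ax740s1n1", "ax740s2n1"   # Bangalore node -> Chennai node
src_vol = "C:\\ClusterStorage\\ax740xds1N1"
src_log = "C:\\ClusterStorage\\ax740xds1N1_Log"
dst_vol = "C:\\ClusterStorage\\ax740xds1N1_Rep"
dst_log = "C:\\ClusterStorage\\ax740xds1N1_Rep_Log"

# Pair the data volume and its 40 GB log volume between the sites in synchronous mode.
run_ps(
    "New-SRPartnership "
    f"-SourceComputerName {src_node} -SourceRGName rg-s1n1 "
    f"-SourceVolumeName '{src_vol}' -SourceLogVolumeName '{src_log}' "
    f"-DestinationComputerName {dst_node} -DestinationRGName rg-s2n1 "
    f"-DestinationVolumeName '{dst_vol}' -DestinationLogVolumeName '{dst_log}' "
    "-ReplicationMode Synchronous"
)
```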
The following tables summarize the replication partnerships that were relevant to the proof-of-concept test scenarios.
Table 3 shows the replication direction From Bangalore to Chennai. Table 4 shows the replication direction From Chennai to Bangalore.
Note: Some volumes, such as the Cluster Performance History volume and its related log files, are not listed in the tables.
Table 3. Replication direction: From Bangalore to Chennai
| Bangalore (Site 1) volume name | Size | Chennai (Site 2) volume name | Size | VM disks |
|---|---|---|---|---|
| ax740xds1N1 | 800 GB | ax740xds1N1_Rep | — | 10 VMs created using VMFleet |
| ax740xds1N1_Log | 40 GB | ax740xds1N1_Rep_Log | 40 GB | — |
| ax740xds1N2 | 800 GB | ax740xds1N2_Rep | — | 10 VMs created using VMFleet |
| ax740xds1N2_Log | 40 GB | ax740xds1N2_Rep_Log | 40 GB | — |
| OM | 100 GB | OM-Replica | — | Linux-based VM running Dell OpenManage Enterprise |
| OM-Log | 40 GB | OM-Replica-Log | 40 GB | — |
Table 4. Replication direction: From Chennai to Bangalore
| Chennai (Site 2) volume name | Size | Bangalore (Site 1) volume name | Size | VM disks |
|---|---|---|---|---|
| ax740xds2N1 | 800 GB | ax740xds2N1_Rep | — | 10 VMs created using VMFleet |
| ax740xds2N1_Log | 40 GB | ax740xds2N1_Rep_Log | 40 GB | — |
| ax740xds2N2 | 800 GB | ax740xds2N2_Rep | — | 10 VMs created using VMFleet |
| ax740xds2N2_Log | 40 GB | ax740xds2N2_Rep_Log | 40 GB | — |
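After the partnerships are in place, replication health can be spot-checked from any cluster node. This minimal sketch, again assuming a Windows host with the Storage Replica cmdlets available, lists each replicated data volume with its replication mode and status.

```python
import json
import subprocess

def run_ps_json(command: str):
    """Run a PowerShell pipeline and return its output parsed from JSON."""
    completed = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command + " | ConvertTo-Json -Depth 3"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(completed.stdout)

# Each Storage Replica group exposes its replicas, including the replicated data
# volume, the replication mode, and the current replication status.
replicas = run_ps_json(
    "(Get-SRGroup).Replicas | Select-Object DataVolume, ReplicationMode, ReplicationStatus"
)

# ConvertTo-Json emits a single object instead of a list when only one replica exists.
if isinstance(replicas, dict):
    replicas = [replicas]

for replica in replicas:
    # Enum-valued properties may serialize as integers rather than names.
    print(replica["DataVolume"], replica["ReplicationMode"], replica["ReplicationStatus"])
```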
We created 10 VMFleet VMs on each node for testing and named them according to a consistent naming convention.
The VMFleet VMs generated a moderate I/O load across the nodes in the cluster throughout the testing scenarios. In addition to the VMFleet VMs, we created a Linux-based virtual appliance named OME-1 on the cluster. OME-1 ran Dell OpenManage Enterprise software and was used to observe the behavior of a real-world application during the failure scenarios.
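To see where the VMFleet VMs and the OME-1 appliance were running at any point during a failure scenario, the clustered VM roles and their owner nodes can be listed directly. The sketch below is again only a rough illustration; it assumes a Windows host with the Failover Clustering PowerShell module available and uses the cluster name from this section.

```python
import subprocess

# Cluster name from this section.
CLUSTER = "ax740sc101"

def run_ps(command: str) -> str:
    """Run a PowerShell command and return its standard output."""
    completed = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return completed.stdout

# List every virtual machine role in the cluster with its state and owner node,
# which shows how the VMFleet VMs and OME-1 move between sites during a failover.
print(run_ps(
    f"Get-ClusterGroup -Cluster {CLUSTER} "
    "| Where-Object GroupType -eq 'VirtualMachine' "
    "| Sort-Object OwnerNode "
    "| Format-Table Name, State, OwnerNode -AutoSize"
))
```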