PowerFlex 3.6.0 is built in the lab with four PowerFlex HCI nodes. PowerFlex Manager automatically creates all required PowerFlex network components and provisions the PowerFlex cluster on VMware vSphere 7.0 U2. The PowerFlex Manager cluster creation job installs the PowerFlex SDS, MDM, and SDC components on each HCI node. The SDS on each node aggregates the raw local storage and shares it as part of the PowerFlex cluster. A single protection domain is created from the drives on these HCI nodes, two storage pools are configured within it, and multiple volumes are carved out to meet the SQL Managed Instance storage requirements. The volumes are mapped to the compute nodes and later to the virtual machines.
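The provisioning steps above (protection domain, storage pools, volumes, and volume mapping) can be sketched with the PowerFlex `scli` utility run against the primary MDM. This is an illustrative outline only, not the exact commands used in the lab; the object names (`pd1`, `pool_data`, `sqlmi_vol01`), the volume size, and the SDC IP address are hypothetical placeholders.

```shell
# Illustrative sketch of the provisioning flow described above.
# Run against the primary MDM after 'scli --login'. All names and
# values below are placeholders, not the lab's actual configuration.

# Create a protection domain from the HCI node drives
scli --add_protection_domain --protection_domain_name pd1

# Configure a storage pool within the protection domain
scli --add_storage_pool --protection_domain_name pd1 \
     --storage_pool_name pool_data

# Carve out a volume for SQL Managed Instance storage
scli --add_volume --protection_domain_name pd1 \
     --storage_pool_name pool_data \
     --size_gb 512 --volume_name sqlmi_vol01

# Map the volume to the SDC on a compute node so the
# virtual machines on that node can consume it
scli --map_volume_to_sdc --volume_name sqlmi_vol01 --sdc_ip 192.168.10.21
```

In practice, PowerFlex Manager performs the equivalent of these steps as part of its cluster creation and service templates, so direct `scli` use is typically only needed for custom layouts.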
Table 2. The configuration details of the PowerFlex System in the lab
| Cluster design elements | Description |
|---|---|
| Cluster node model | PowerEdge R640 |
| Number of cluster nodes | 4 |
| CPU | 2 x Intel(R) Xeon(R) Gold 6132 CPU @ 2.60 GHz |
| Memory | 384 GB (12 x 32 GB) |
| Network | rNDC: Intel(R) 10 GbE 4P X710 rNDC; PCIe: 2 x MLNX 25 GbE 2P ConnectX4LX Adapter; BOSS card |
| Network topology | PowerFlex proprietary |
| Volume resiliency | 2 |
| Usable storage capacity | 10 x 1788.5 GB (SSD) per node |
Note: The solution was validated in an engineering lab using the PowerFlex HCI platform with a VMware vSphere environment. However, the implementation and validation that this paper describes are also applicable to PowerFlex HCI and two-layer platforms with virtualized and bare-metal configurations. See Microsoft Azure Arc on Dell EMC PowerFlex for details. The protection domain, storage pool, and volumes for the PowerFlex platform can be configured according to requirements. For more information, see PowerFlex Storage definitions.