NVMe Tier Configuration
Each PowerEdge R650 server has 10 NVMe devices directly connected to the CPU in Socket 1 (so this is not a balanced configuration in terms of NUMA domains), and two Mellanox ConnectX-6 Single Port VPI HDR adapters (HCAs), one per CPU socket. For the configuration characterized here, Dell AG 1.6 TB (PM1735) PCIe Gen4 devices were used, since they have the same read and write performance for large blocks and fairly good random I/O performance for small transfers, which simplifies scaling the tier and estimating the number of server pairs needed to meet its requirements. Nevertheless, any NVMe device supported on the PowerEdge R650 is supported for the NVMe nodes.
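The NUMA placement described above can be verified on a running node. The following is a minimal sketch, assuming the standard Linux sysfs layout for NVMe controllers; it simply lists each controller and the NUMA node reported by its PCIe parent device.

```python
from pathlib import Path

def nvme_numa_map() -> dict[str, int]:
    """Map each NVMe controller to the NUMA node of its PCIe device.

    Assumes the standard Linux sysfs layout, where each controller under
    /sys/class/nvme/ exposes its PCIe parent's numa_node attribute.
    """
    mapping = {}
    for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
        numa_file = ctrl / "device" / "numa_node"
        if numa_file.is_file():
            mapping[ctrl.name] = int(numa_file.read_text().strip())
    return mapping

if __name__ == "__main__":
    for name, node in nvme_numa_map().items():
        print(f"{name}: NUMA node {node}")
    # On the servers described here, all 10 controllers would be expected
    # to report the NUMA node corresponding to CPU Socket 1.
```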
Those NVMe drives are configured as eight RAID 10 devices across a pair of servers, using NVMesh as the NVMe over Fabrics component to provide data redundancy, not only at the device level but also at the server level. In addition, when data is read from or written to any of those RAID 10 devices, all 20 drives in both servers are used, increasing the access bandwidth to that of all the drives combined. Therefore, the only restriction for these NVMe tier servers is that they must be deployed in pairs.
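To make the bandwidth argument concrete, the sketch below models a mirrored-and-striped (RAID 10) volume whose mirror halves live on opposite servers. The per-drive throughput figure is a placeholder assumption, and the write-halving reflects generic RAID 10 mirroring behavior rather than details of the NVMesh layout, which the white paper does not spell out.

```python
# Placeholder assumptions, not measured values from this configuration:
DRIVES_PER_SERVER = 10       # NVMe devices in each server of the pair
PER_DRIVE_GBPS = 6.0         # assumed large-block throughput per drive, GB/s

def raid10_pair_bandwidth(drives_per_server: int = DRIVES_PER_SERVER,
                          per_drive_gbps: float = PER_DRIVE_GBPS) -> tuple[float, float]:
    """Estimate aggregate read/write bandwidth of RAID 10 volumes spanning a server pair.

    Reads can be serviced from every drive in both servers, so they scale
    with the full drive count. Writes also touch every drive, but each byte
    is written twice (once per mirror half), halving effective bandwidth.
    """
    total_drives = 2 * drives_per_server
    read_gbps = total_drives * per_drive_gbps
    write_gbps = read_gbps / 2
    return read_gbps, write_gbps

if __name__ == "__main__":
    read_bw, write_bw = raid10_pair_bandwidth()
    print(f"Estimated aggregate read:  {read_bw:.0f} GB/s")
    print(f"Estimated aggregate write: {write_bw:.0f} GB/s")
```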
The PowerEdge R650 servers tested in this configuration have two ConnectX-6 (CX6) VPI HDR 200 Gbps InfiniBand adapters. Both CX6 interfaces are actively used to move data, synchronize the RAID 10 devices over NVMe over Fabrics, and provide the file system connectivity to clients. In addition, they provide hardware redundancy at the adapter, port, and cable level, although performance is reduced if only one adapter is operational. A follow-on document will describe in detail the options for the new generation of the NVMe tier, including a performance characterization of that tier.
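Because operation on a single adapter is functional but slower, a quick health check of both HDR ports can be useful. The sketch below is a minimal example that reads the standard Linux sysfs attributes for InfiniBand ports (state and link rate); device names such as mlx5_0 vary by system and are not specific to this configuration.

```python
from pathlib import Path

IB_ROOT = Path("/sys/class/infiniband")

def check_ib_ports() -> None:
    """Print state and link rate for every InfiniBand port on the host.

    Relies on the standard sysfs attributes exposed by the mlx5 driver;
    on the servers described here, both HDR ports should report ACTIVE
    and a rate of 200 Gb/sec.
    """
    for hca in sorted(IB_ROOT.glob("*")):
        for port in sorted((hca / "ports").glob("*")):
            state = (port / "state").read_text().strip()  # e.g. "4: ACTIVE"
            rate = (port / "rate").read_text().strip()     # e.g. "200 Gb/sec (4X HDR)"
            print(f"{hca.name} port {port.name}: {state}, {rate}")

if __name__ == "__main__":
    check_ib_ports()
```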