NVMe Nodes
Each PowerEdge R7625 server has 16 NVMe E3.S PCIe Gen 5 devices directly attached, eight to each CPU. Riser configuration 5-1 is used: the three x16 Gen 4 slots are left unpopulated and the three x16 Gen 5 slots are used. Of the slots in use, slots 2 and 7 are populated with Mellanox ConnectX-7 Single Port NDR 400 Gbps HCAs (one per CPU socket), so the configuration is balanced across NUMA domains. Any NVMe device supported on the PowerEdge R7625 server is supported for these NVMe nodes. Both CX7 interfaces are used actively to move data, replicate the NVMe NSDs, and provide connectivity from the file system to clients. They also provide hardware redundancy at the adapter, port, and cable level, although performance is reduced if only one CX7 adapter is operational. LOM port 1 and the dedicated iDRAC port are connected to the 1 GbE management network.
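The layout above places eight NVMe devices and one ConnectX-7 HCA on each CPU socket. As a minimal sketch, assuming a Linux host where the kernel exposes the standard sysfs attributes /sys/class/nvme/<dev>/device/numa_node and /sys/class/infiniband/<hca>/device/numa_node, a check such as the following could be used to confirm that the devices actually report the expected balanced NUMA placement (this script is illustrative only and is not part of the validated design):

```python
#!/usr/bin/env python3
"""Illustrative sketch: report the NUMA node of each NVMe device and HCA.

Assumes a Linux host with sysfs mounted and the standard kernel paths
/sys/class/nvme/<dev>/device/numa_node and
/sys/class/infiniband/<hca>/device/numa_node.
"""
from pathlib import Path


def numa_map(class_dir: str) -> dict[str, int]:
    """Return {device name: NUMA node} for every entry under a sysfs class."""
    result = {}
    for dev in sorted(Path(class_dir).glob("*")):
        numa_file = dev / "device" / "numa_node"
        if numa_file.is_file():
            result[dev.name] = int(numa_file.read_text().strip())
    return result


if __name__ == "__main__":
    nvme = numa_map("/sys/class/nvme")        # expected: eight devices per socket
    hcas = numa_map("/sys/class/infiniband")  # expected: one CX7 per socket

    for name, node in {**nvme, **hcas}.items():
        print(f"{name}: NUMA node {node}")

    # A balanced layout places half of the NVMe devices and one HCA on each node.
    for node in sorted(set(nvme.values()) | set(hcas.values())):
        n_nvme = sum(1 for v in nvme.values() if v == node)
        n_hca = sum(1 for v in hcas.values() if v == node)
        print(f"NUMA node {node}: {n_nvme} NVMe devices, {n_hca} HCAs")
```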
The R7625 server was not evaluated for performance because the NVMe PCIe Gen 5 version was not available when hardware was procured for this effort. It is therefore only described up to this point; a future update of the solution will include its performance evaluation.
Figure 7. R7625 NVMe node slot allocation
To maintain homogeneous performance across the NVMe nodes and allow data to be striped evenly across the nodes in this tier, do not mix different server models in the same NVMe tier. However, multiple NVMe tiers, each built from different servers and accessed through different filesets, are supported. Mixing NVMe PCIe Gen 5 devices with lower-performing PCIe Gen 4 devices from previous generations of the solution is not recommended, and it is not supported within the same NVMe tier.
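As an illustration of the per-node homogeneity requirement, the short sketch below reads only the standard /sys/class/nvme/<dev>/model sysfs attribute and flags a node whose local NVMe devices do not all report the same model string, which would indicate mixed device generations within a tier. It is a hypothetical helper, not part of the validated design; when run on every NVMe node of a tier, all nodes should report the same single model.

```python
#!/usr/bin/env python3
"""Illustrative sketch: flag mixed NVMe device models on a node.

Uses only the standard /sys/class/nvme/<dev>/model sysfs attribute.
Run on each NVMe node of a tier; a homogeneous tier reports one model
string, and the same one, on every node.
"""
from collections import Counter
from pathlib import Path

# Count how many local NVMe controllers report each model string.
models = Counter(
    p.read_text().strip()
    for p in Path("/sys/class/nvme").glob("*/model")
)

if len(models) <= 1:
    print(f"OK: {sum(models.values())} NVMe devices, model(s): {dict(models)}")
else:
    print("WARNING: mixed NVMe device models on this node:")
    for model, count in models.items():
        print(f"  {count} x {model}")
```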