NVMe Nodes
Each PowerEdge R660 server has 14 NVMe E3.S PCIe 5 devices directly attached: six to the CPU in socket 1 and eight to the CPU in socket 2, as shown in the following figure. This configuration is therefore not balanced across NUMA domains. Riser configuration 2 is used, providing one x16 Gen 4 slot (slot 2) and two x16 Gen 5 slots (slots 1 and 3); the Gen 5 slots are populated with Mellanox ConnectX-7 single-port NDR 400 Gbps HCAs, one per CPU socket. Any NVMe device supported on the PowerEdge R660 server is supported for these NVMe nodes. Both CX7 interfaces are used to move data, replicate the NVMe NSDs, and provide connectivity from the file system to clients. They also provide hardware redundancy at the adapter, port, and cable level, although performance is reduced if only one CX7 adapter is operational. LOM port 1 and the dedicated iDRAC port are connected to the 1 GbE management network.
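Because the NVMe devices are split unevenly between the two sockets, it can be useful to verify which NUMA node each controller is attached to on a deployed node. The following is a minimal sketch using the standard Linux sysfs layout; the device names and counts reported will vary per system, and this is an illustrative check rather than part of the validated design.

```shell
# Print the NUMA node for each NVMe controller, using the standard
# Linux sysfs path /sys/class/nvme/<ctrl>/device/numa_node.
# On an R660 NVMe node as described above, six controllers should
# report node 0 and eight should report node 1.
for dev in /sys/class/nvme/nvme*; do
    [ -d "$dev" ] || continue   # skip cleanly if no NVMe devices are present
    printf '%s: NUMA node %s\n' "$(basename "$dev")" "$(cat "$dev/device/numa_node")"
done
```

The same `numa_node` attribute exists for the ConnectX-7 adapters under `/sys/class/net/<iface>/device/numa_node`, which can confirm that each HCA sits on the socket it is intended to serve.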
Figure 5. PowerEdge R660 NVMe node slot allocation