Clients test bed
The following table lists the clients that we used to characterize the NVMe tier:
| Component | Details |
|---|---|
| Number of client nodes | 16 |
| Client node | C6420 |
| Processors per client node | 11 nodes: 2 x Intel Xeon Gold 6230, 20 cores @ 2.1 GHz; 5 nodes: 2 x Intel Xeon Gold 6248, 20 cores @ 2.4 GHz |
| Memory per client node | 6230 nodes: 12 x 16 GiB 2933 MT/s RDIMMs (192 GiB); 6248 nodes: 12 x 16 GiB 2666 MT/s RDIMMs (192 GiB) |
| BIOS | 2.8.2 |
| Operating system | CentOS 8.4.2105 |
| Operating system kernel | 4.18.0-305.12.1.el8_4.x86_64 |
| pixstor software | 6.0.3.1-1 |
| Spectrum Scale (GPFS) | 5.1.3-0 |
| OFED version | MLNX_OFED_LINUX-5.4-1.0.3.0 |
| CX6 firmware | 8 nodes: Mellanox CX6 single port, 20.32.1010; 8 nodes: Dell OEM CX6 single port, 20.31.2006 |
Because only 16 compute nodes were available for testing, higher thread counts were distributed equally across the compute nodes (that is, 32 threads = 2 threads per node, 64 threads = 4 threads per node, 128 threads = 8 threads per node, 256 threads = 16 threads per node, 512 threads = 32 threads per node, 1024 threads = 64 threads per node). The intention was to simulate a higher number of concurrent clients with the limited number of compute nodes. Because the benchmarks support a high number of threads, a maximum of 512 was used, based on the number of cores in the client nodes (except for the IOzone sequential and random tests with N clients – N files). Using more threads than cores can cause excessive context switching and other side effects that skew performance results.
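The even-distribution scheme above can be sketched as a short helper. This is an illustrative snippet, not part of the pixstor tooling or the actual benchmark harness; the function name and the fixed node count of 16 are assumptions made for this example.

```python
# Illustrative sketch of how benchmark threads were spread evenly
# across the 16 available compute nodes (hypothetical helper, not
# from the pixstor test scripts).
NUM_NODES = 16

def threads_per_node(total_threads: int) -> int:
    """Return the equal per-node thread count for a given total."""
    if total_threads % NUM_NODES:
        raise ValueError("total_threads must divide evenly across 16 nodes")
    return total_threads // NUM_NODES

# The thread counts used in the characterization runs:
for total in (32, 64, 128, 256, 512, 1024):
    print(f"{total:4d} threads -> {threads_per_node(total)} per node")
```

For example, 512 total threads yields 32 threads per node, which is below the 40 physical cores available on each dual-socket, 20-core client node.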