Benchmarks and test beds
To characterize the different components of the pixstor solution, we used the hardware specified in the last column of Table 2, including the optional High Demand Metadata (HDMD) module. To assess solution performance, we selected the benchmarks described in the following sections.

For these benchmarks, the test bed included the clients described in the following table, except when testing the gateway nodes:
| Component | Description |
|---|---|
| Number of client nodes | 16 |
| Client node | C6420 |
| Processors per client node | 8 nodes with 2 x Intel Xeon Gold 6230 (20 cores @ 2.10 GHz); 8 nodes with 2 x Intel Xeon Gold 6148 (20 cores @ 2.40 GHz) |
| Memory per client node | 8 nodes (6230) with 12 x 16 GiB 2933 MT/s RDIMMs; 8 nodes (6148) with 12 x 16 GiB 2666 MT/s RDIMMs |
| BIOS | 2.12.2 |
| Operating system | CentOS 8.4 |
| Operating system kernel | 4.18.0-305.12.1.el8_4.x86_64 |
| pixstor software | 6.0.3 |
| Spectrum Scale (GPFS) | 5.1.3-1 |
| OFED version | MLNX_OFED_LINUX-5.6-1.0.3.3 |
| ConnectX-6 (CX6) firmware | 8 nodes (CPU 6230) with Mellanox CX6 single port: 20.31.1014; 8 nodes (CPU 6148) with Dell OEM CX6 single port: 20.28.4512 |
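Keeping all 16 clients at the exact software levels in the table matters for reproducible results. The following minimal Python sketch (an illustration only, assuming passwordless SSH and hypothetical hostnames client01 through client16) collects the kernel, OFED, GPFS package, and ConnectX-6 firmware versions from each node using standard commands (uname -r, ofed_info -s, rpm -q gpfs.base, and ibv_devinfo):

```python
import subprocess

# Hypothetical client hostnames; substitute the real test-bed names.
NODES = [f"client{i:02d}" for i in range(1, 17)]

# Standard commands that report the versions listed in the table above.
CHECKS = {
    "kernel": "uname -r",
    "ofed": "ofed_info -s",
    "gpfs": "rpm -q gpfs.base",
    "cx6_fw": "ibv_devinfo | grep fw_ver",
}

def collect_versions(node: str) -> dict:
    """Run each check on one node over SSH and return the trimmed output."""
    results = {}
    for name, cmd in CHECKS.items():
        out = subprocess.run(
            ["ssh", node, cmd],
            capture_output=True, text=True, timeout=30,
        )
        results[name] = out.stdout.strip() or out.stderr.strip()
    return results

if __name__ == "__main__":
    for node in NODES:
        print(node, collect_versions(node))
```

Any node whose output deviates from the table values would be corrected or excluded before benchmarking.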
Because only 16 compute nodes were available for testing, higher thread counts were distributed equally across the nodes (that is, 32 threads = 2 threads per node, 64 threads = 4 threads per node, 128 threads = 8 threads per node, 256 threads = 16 threads per node, 512 threads = 32 threads per node, and 1024 threads = 64 threads per node). The intention was to simulate a higher number of concurrent clients with the limited number of compute nodes. Because the benchmarks support a high number of threads, a maximum of 512 threads was used, based on the number of cores in the client nodes (except for the IOzone N clients to N files test). Using more threads than cores can cause excessive context switching and other side effects that distort performance results.
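To make the distribution scheme concrete, the short Python sketch below builds the even per-node thread split and writes an IOzone client file for its distributed (throughput) mode, which IOzone consumes via the -+m option, with one line per thread in the form "hostname work_dir iozone_path". The hostnames, mount point, and IOzone path are hypothetical placeholders, not the values used in this test bed:

```python
# Sketch: distribute a total thread count evenly across 16 client nodes
# and emit an IOzone "-+m" client file. All names below are assumptions.

NODES = [f"client{i:02d}" for i in range(1, 17)]  # 16 hypothetical clients
WORK_DIR = "/mnt/pixstor/iozone"                  # assumed file system mount
IOZONE = "/usr/local/bin/iozone"                  # assumed install path

def threads_per_node(total_threads: int, nodes: int = 16) -> int:
    """Threads are spread equally, e.g. 512 threads -> 32 per node."""
    if total_threads % nodes:
        raise ValueError("total threads must divide evenly across nodes")
    return total_threads // nodes

def write_clientfile(total_threads: int, path: str = "clients.txt") -> None:
    """Write one '<host> <work_dir> <iozone_path>' line per thread."""
    per_node = threads_per_node(total_threads)
    with open(path, "w") as f:
        for node in NODES:
            for _ in range(per_node):
                f.write(f"{node} {WORK_DIR} {IOZONE}\n")

if __name__ == "__main__":
    # 256 threads = 16 threads per node, matching the scheme above
    write_clientfile(256)
```

A run would then pass the generated file to IOzone together with a matching thread count (for example, -+m clients.txt -t 256); the same per-node counts apply when threads are launched through other mechanisms.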