The StorageBench benchmark measures the streaming throughput of the underlying file system and devices, independently of network performance. The benchmark is started and monitored with the beegfs-ctl tool, which is provided by the beegfs-utils package. To simulate client I/O, it generates read and write operations locally on the storage servers, without any client communication.
On the ME5 arrays, running with fewer threads per target increased throughput compared to the ME4. The following example starts a write benchmark on all targets of all BeeGFS storage servers with an I/O block size of 1m, using three threads per target, each of which writes 83.333 GB of data to its own file. In this configuration, the benchmark writes an aggregate total of 3 threads * 32 storage targets * 83.333 GB ≈ 8 TB.
# beegfs-ctl --storagebench --alltargets --write --blocksize=1m --size=83333M --threads=3
Write storage benchmark was started. You can query the status with the --status argument of beegfs-ctl.
Server benchmark status: Running: 2
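The aggregate size arithmetic above can be sanity-checked with shell arithmetic; a minimal sketch using the figures from this example (3 threads, 32 targets, 83333 M per thread, treated as decimal MB to follow the text's own rounding):

```shell
# Aggregate data written = threads per target x storage targets x size per thread.
# Figures are taken from the benchmark invocation above.
threads=3
targets=32
size_mb=83333
total_mb=$((threads * targets * size_mb))
echo "${total_mb} MB total"   # 7999968 MB, i.e. ~8 TB
```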
To query the benchmark status and results for all targets, run the following command:
# beegfs-ctl --storagebench --alltargets --status --verbose
Server benchmark status: Finished: 2
Write benchmark results:
Min throughput:       904337 KiB/s   nodeID: storageB [ID: 2], targetID: 30
Max throughput:      1001486 KiB/s   nodeID: storageA [ID: 1], targetID: 13
Avg throughput:       946137 KiB/s
Aggregate throughput: 30276385 KiB/s
List of all targets:
 1    939170 KiB/s  nodeID: storageA [ID: 1]
 2    996384 KiB/s  nodeID: storageA [ID: 1]
 3    941789 KiB/s  nodeID: storageA [ID: 1]
 4    995194 KiB/s  nodeID: storageA [ID: 1]
 5    965538 KiB/s  nodeID: storageA [ID: 1]
 6    997436 KiB/s  nodeID: storageA [ID: 1]
 7    939469 KiB/s  nodeID: storageA [ID: 1]
 8    997001 KiB/s  nodeID: storageA [ID: 1]
 9   1000081 KiB/s  nodeID: storageA [ID: 1]
10    975687 KiB/s  nodeID: storageA [ID: 1]
11    974937 KiB/s  nodeID: storageA [ID: 1]
12    991752 KiB/s  nodeID: storageA [ID: 1]
13   1001486 KiB/s  nodeID: storageA [ID: 1]
14    975145 KiB/s  nodeID: storageA [ID: 1]
15    972958 KiB/s  nodeID: storageA [ID: 1]
16    991755 KiB/s  nodeID: storageA [ID: 1]
17    904682 KiB/s  nodeID: storageB [ID: 2]
18    910179 KiB/s  nodeID: storageB [ID: 2]
19    917154 KiB/s  nodeID: storageB [ID: 2]
20    908761 KiB/s  nodeID: storageB [ID: 2]
21    918233 KiB/s  nodeID: storageB [ID: 2]
22    929526 KiB/s  nodeID: storageB [ID: 2]
23    904650 KiB/s  nodeID: storageB [ID: 2]
24    927542 KiB/s  nodeID: storageB [ID: 2]
25    915285 KiB/s  nodeID: storageB [ID: 2]
26    918840 KiB/s  nodeID: storageB [ID: 2]
27    904625 KiB/s  nodeID: storageB [ID: 2]
28    904427 KiB/s  nodeID: storageB [ID: 2]
29    918312 KiB/s  nodeID: storageB [ID: 2]
30    904337 KiB/s  nodeID: storageB [ID: 2]
31    917674 KiB/s  nodeID: storageB [ID: 2]
32    916376 KiB/s  nodeID: storageB [ID: 2]
From the output, we can infer that the maximum write performance that can be achieved is approximately 31 GB/s (an aggregate of 30276385 KiB/s), and that the storage targets and connections are properly configured.
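The aggregate throughput is reported in KiB/s; converting it to GB/s is a one-liner (awk is used here only for the floating-point arithmetic):

```shell
# Convert the aggregate write throughput reported by beegfs-ctl (KiB/s) to GB/s:
# 1 KiB = 1024 bytes, 1 GB = 10^9 bytes.
agg_kib=30276385
awk -v k="$agg_kib" 'BEGIN { printf "%.2f GB/s\n", k * 1024 / 1e9 }'   # 31.00 GB/s
```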
The following example starts a read benchmark on all targets of all BeeGFS storage servers, again with an I/O block size of 1m and three threads per target:
# beegfs-ctl --storagebench --alltargets --read --blocksize=1m --size=83333M --threads=3
Read storage benchmark was started. You can query the status with the --status argument of beegfs-ctl.
Server benchmark status: Running: 2
# beegfs-ctl --storagebench --alltargets --status --verbose
Server benchmark status: Finished: 2
Read benchmark results:
Min throughput:       689416 KiB/s   nodeID: storageA [ID: 1], targetID: 1
Max throughput:      1190017 KiB/s   nodeID: storageA [ID: 1], targetID: 12
Avg throughput:       942333 KiB/s
Aggregate throughput: 30154675 KiB/s
List of all targets:
 1    689416 KiB/s  nodeID: storageA [ID: 1]
 2    939904 KiB/s  nodeID: storageA [ID: 1]
 3    959275 KiB/s  nodeID: storageA [ID: 1]
 4    981542 KiB/s  nodeID: storageA [ID: 1]
 5    976171 KiB/s  nodeID: storageA [ID: 1]
 6    999804 KiB/s  nodeID: storageA [ID: 1]
 7    995303 KiB/s  nodeID: storageA [ID: 1]
 8    991671 KiB/s  nodeID: storageA [ID: 1]
 9   1163885 KiB/s  nodeID: storageA [ID: 1]
10   1185279 KiB/s  nodeID: storageA [ID: 1]
11   1178187 KiB/s  nodeID: storageA [ID: 1]
12   1190017 KiB/s  nodeID: storageA [ID: 1]
13   1177244 KiB/s  nodeID: storageA [ID: 1]
14   1186608 KiB/s  nodeID: storageA [ID: 1]
15   1186889 KiB/s  nodeID: storageA [ID: 1]
16   1179729 KiB/s  nodeID: storageA [ID: 1]
17   1102887 KiB/s  nodeID: storageB [ID: 2]
18   1123221 KiB/s  nodeID: storageB [ID: 2]
19   1117767 KiB/s  nodeID: storageB [ID: 2]
20    709551 KiB/s  nodeID: storageB [ID: 2]
21    706013 KiB/s  nodeID: storageB [ID: 2]
22    716241 KiB/s  nodeID: storageB [ID: 2]
23    708539 KiB/s  nodeID: storageB [ID: 2]
24    705435 KiB/s  nodeID: storageB [ID: 2]
25    784060 KiB/s  nodeID: storageB [ID: 2]
26    779695 KiB/s  nodeID: storageB [ID: 2]
27    789206 KiB/s  nodeID: storageB [ID: 2]
28    787650 KiB/s  nodeID: storageB [ID: 2]
29    786551 KiB/s  nodeID: storageB [ID: 2]
30    791030 KiB/s  nodeID: storageB [ID: 2]
31    782739 KiB/s  nodeID: storageB [ID: 2]
32    783166 KiB/s  nodeID: storageB [ID: 2]
From the output, we can infer that the maximum read performance that can be achieved is approximately 30.88 GB/s (an aggregate of 30154675 KiB/s). For storage bench results with varying thread counts, see Further storage bench results.
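The summary values can be cross-checked against the per-target list. The sketch below recomputes min, max, and aggregate with awk over a small subset of the read throughput values shown above (four values only, so the aggregate is that of the subset, not of all 32 targets):

```shell
# Recompute min/max/aggregate (KiB/s) from per-target throughput values.
# The four values are copied from the verbose read output above.
printf '%s\n' 689416 939904 1190017 709551 |
awk '{ sum += $1
       if (min == "" || $1 < min) min = $1
       if ($1 > max) max = $1 }
     END { printf "min=%d max=%d aggregate=%d KiB/s\n", min, max, sum }'
# -> min=689416 max=1190017 aggregate=3528888 KiB/s
```

Running the same recomputation over the full 32-value list should reproduce the reported min, max, and aggregate exactly.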
The files generated by the benchmark are not deleted automatically when the benchmark completes, and they are not visible to users. They can be deleted with the following command:
# beegfs-ctl --storagebench --alltargets --cleanup