NetBench benchmark
NetBench mode is intended for network throughput benchmarking. When NetBench mode is enabled, data is not actually written to the storage devices. Instead, write requests sent over the network are discarded by the storage servers and are never submitted to the underlying file system. Similarly, for read requests, instead of reading from the underlying file system on the servers, only memory buffers are sent to the clients. Consequently, NetBench mode is independent of the underlying disks and can be used to measure the maximum network throughput between the clients and the storage servers.
Before the disk benchmarking with IOzone was started, NetBench mode was used to benchmark the solution's overall network performance. Enable NetBench mode on the clients as shown below:
"echo 1 > /proc/fs/beegfs/<client ID>/NetBench_mode"
Note: When you have multiple BeeGFS file systems mounted on a client, make sure that NetBench mode is enabled only on the file system under test. The following command can be used to identify the client node IDs registered with a given management node:

beegfs-ctl --listnodes --nodetype=client --sysMgmtdHost=10.10.218.200
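Each mounted BeeGFS instance appears as its own client-ID subdirectory under /proc/fs/beegfs/, so the IDs reported by beegfs-ctl can be cross-checked against the procfs entries before the mode is enabled. A brief sketch:

# One subdirectory per mounted BeeGFS instance, named by its client node ID
ls /proc/fs/beegfs/

# Enable NetBench mode only on the ID that matches the file system under test
echo 1 > /proc/fs/beegfs/<client ID>/netbench_mode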
Provided below is partial output from the NetBench benchmark for the large configuration of the BeeGFS High-Capacity Storage Solution with 4x PowerVault Storage Arrays.
Iozone: Performance Test of File I/O
        Version $Revision: 3.492 $
        Compiled for 64 bit mode.
        Build: linux-AMD64

Contributors: William Norcott, Don Capps, Isom Crawford, Kirby Collins,
              Al Slater, Scott Rhine, Mike Wisner, Ken Goss,
              Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
              Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
              Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
              Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
              Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
              Vangel Bojaxhi, Ben England, Vikentsi Lapa, Alexey Skidanov,
              Sudhir Kumar.

Run began: Fri Dec 2 16:20:57 2022

Include close in write timing
Include fsync in write timing
Record Size 1024 kB
File size set to 7812500 kB
No retest option selected
Network distribution mode enabled.
Command line used: /home/brendan/iozone_builder/iozone3_492/src/current/iozone -i 0 -c -e -r 1m -s 7812500 -t 1024 -+n -+m /home/brendan/iozone_builder/machinefile
Output is in kBytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 kBytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 1024 processes
Each process writes a 7812500 kByte file in 1024 kByte records

Test running:
Children see throughput for 1024 initial writers = 47684492.22 kB/sec
Min throughput per process                       =    42955.05 kB/sec
Max throughput per process                       =    48262.39 kB/sec
Avg throughput per process                       =    46566.89 kB/sec
Min xfer                                         =  6951936.00 kB

Test cleanup:
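For reference, the command line recorded above uses IOzone's distributed mode (-+m), which reads a machine file listing one process per line: the client hostname, a working directory on the BeeGFS mount, and the path to the iozone binary on that client. The hostnames and paths in this sketch are illustrative placeholders, not the values used in the actual run:

# machinefile: <client hostname> <working directory on BeeGFS mount> <path to iozone>
node001 /mnt/beegfs/netbench /usr/local/bin/iozone
node002 /mnt/beegfs/netbench /usr/local/bin/iozone

# Sequential write test (-i 0) with 1024 kB records (-r 1m), a 7812500 kB file
# per process (-s), 1024 processes (-t), close (-c) and fsync (-e) included in
# the timing, and no retest (-+n)
iozone -i 0 -c -e -r 1m -s 7812500 -t 1024 -+n -+m ./machinefile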
The theoretical network throughput of this solution, which has two HDR InfiniBand adapters, is 50 GB/s, and the maximum performance achieved over the existing network infrastructure was 47.7 GB/s.
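These figures can be checked against the IOzone output above; the conversion below assumes decimal units (1 GB = 10^9 bytes):

2 adapters × 200 Gb/s (HDR InfiniBand) = 400 Gb/s = 50 GB/s theoretical
47,684,492.22 kB/s measured ≈ 47.7 GB/s
47.7 GB/s ÷ 50 GB/s ≈ 95% of the theoretical line rate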