Most HPC systems are configured with two networks:
- An administration network—This network is typically Gigabit Ethernet and connects to the onboard LOM/NDC of every server in the cluster. It is used for provisioning, management, and administration. On the compute servers, it also carries BMC management traffic; on the infrastructure and storage servers, the iDRAC Enterprise ports can be connected to it for out-of-band (OOB) server management. The administration network typically uses the Dell PowerSwitch N3248TE-ON Ethernet switch. If the system requires more than one switch, multiple switches can be stacked with 10-Gigabit Ethernet cables. A reachability check for the BMC endpoints on this network is sketched after this list.
- A high-speed/low-latency switched fabric—Such a fabric is recommended for clusters with more than four servers. The current recommendation is an HDR InfiniBand fabric, typically assembled from NVIDIA QM8790 40-port HDR InfiniBand switches. The number of switches required depends on the size of the cluster and the blocking ratio of the fabric; the second sketch after this list shows one way to estimate the switch count.
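As a post-provisioning sanity check, the reachability of each BMC/iDRAC on the administration network can be verified with a short script. This is a minimal sketch, assuming hypothetical BMC addresses and probing the iDRAC HTTPS port (443); substitute the addresses and port used in the actual deployment.

```python
import socket

# Hypothetical BMC/iDRAC addresses on the administration network;
# replace with the addresses assigned during provisioning.
BMC_HOSTS = ["192.168.100.11", "192.168.100.12", "192.168.100.13"]

def reachable(host: str, port: int = 443, timeout: float = 2.0) -> bool:
    """True if the BMC/iDRAC accepts a TCP connection on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in BMC_HOSTS:
    print(f"{host}: {'up' if reachable(host) else 'unreachable'}")
```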
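To illustrate how cluster size and blocking ratio drive the switch count, the sketch below estimates leaf and spine counts for a two-level fat tree built from 40-port switches such as the QM8790. The sizing formula and the example node counts are illustrative assumptions, not a validated fabric design.

```python
import math

SWITCH_PORTS = 40  # port count of an NVIDIA QM8790 HDR switch

def fat_tree_switches(nodes: int, blocking: float = 1.0) -> tuple[int, int]:
    """Estimate (leaf, spine) switch counts for a two-level fat tree.

    blocking is the leaf oversubscription ratio: 1.0 is non-blocking,
    2.0 is 2:1, and so on. Valid only for fabrics small enough that
    every leaf can reach every spine; larger clusters need a third tier.
    """
    # Split each leaf's ports between node downlinks and spine uplinks
    # so that downlinks/uplinks matches the requested blocking ratio.
    down = math.floor(SWITCH_PORTS * blocking / (blocking + 1))
    up = SWITCH_PORTS - down
    leaves = math.ceil(nodes / down)
    # Every leaf uplink terminates on a spine port.
    spines = math.ceil(leaves * up / SWITCH_PORTS)
    return leaves, spines

for n, b in [(64, 1.0), (256, 1.0), (256, 2.0)]:
    leaves, spines = fat_tree_switches(n, b)
    print(f"{n} nodes at {b}:1 -> {leaves} leaf + {spines} spine switches")
```

At a 1:1 (non-blocking) ratio each leaf splits its 40 ports evenly, so 64 nodes need 4 leaf and 2 spine switches; relaxing to 2:1 trades fabric bandwidth for fewer switches.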