System networks
Most HPC systems are configured with two networks: an administration network and a high-speed, low-latency switched fabric. The administration network is typically Gigabit Ethernet and connects to the LAN on Motherboard (LOM) port of every server in the cluster. This network is used for provisioning, management, and administration. On the compute servers, this network also carries baseboard management controller (BMC) traffic. For infrastructure and storage servers, the iDRAC Enterprise ports may be connected to this network for out-of-band (OOB) server management. The management network typically uses the Dell PowerSwitch N3248TE-ON Ethernet switch. If the system requires more than one Ethernet switch, the switches can be stacked with 10 Gigabit Ethernet cables.
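The sketch below illustrates one way to plan addressing on the administration network, where compute nodes use a single LOM address that also serves the BMC, while infrastructure and storage servers receive a separate iDRAC address for OOB management. The subnet, node counts, and naming scheme are illustrative assumptions, not values from the validated design.

```python
# Sketch: laying out host and out-of-band (BMC/iDRAC) addresses on the
# administration network. All subnets, counts, and names are assumptions.
import ipaddress

ADMIN_NET = ipaddress.ip_network("10.0.0.0/16")   # assumed management subnet
hosts = list(ADMIN_NET.hosts())

NUM_COMPUTE = 64          # assumed number of compute servers
NUM_INFRA_STORAGE = 8     # assumed infrastructure/storage servers

# Compute nodes: one address on the LOM, which also carries BMC traffic.
compute_addrs = {f"node{i:03d}": hosts[i] for i in range(1, NUM_COMPUTE + 1)}

# Infrastructure/storage servers: a host address plus a dedicated iDRAC
# address for out-of-band management on the same network.
BASE = 1000  # assumed offset into the subnet for infrastructure addressing
infra_addrs = {
    f"infra{i:02d}": hosts[BASE + i] for i in range(1, NUM_INFRA_STORAGE + 1)
}
idrac_addrs = {
    f"infra{i:02d}-idrac": hosts[BASE + 100 + i]
    for i in range(1, NUM_INFRA_STORAGE + 1)
}

# Print a few entries to show the resulting layout.
for name, addr in list(compute_addrs.items())[:3]:
    print(name, addr)
for name, addr in idrac_addrs.items():
    print(name, addr)
```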
A high-speed, low-latency fabric is recommended for clusters with more than four servers. The current recommendation is an NDR InfiniBand fabric. The fabric is typically assembled using NVIDIA QM9790 64-port NDR InfiniBand switches. The number of switches required depends on the size of the cluster and the blocking ratio of the fabric.
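To make the switch-count relationship concrete, the following is a simplified first-pass sizing sketch for a two-tier fat-tree built from 64-port switches such as the QM9790. The helper function and the example node counts are assumptions for illustration; actual fabric designs should follow NVIDIA and Dell sizing guidance.

```python
# Sketch: first-pass estimate of leaf and spine switch counts for a two-tier
# fat-tree built from 64-port NDR switches. This ignores rail, placement,
# and cabling details, and is only a rough planning aid.
import math

def ndr_switch_estimate(num_nodes: int, blocking_ratio: float = 1.0,
                        ports_per_switch: int = 64):
    """Estimate (leaf, spine) switch counts.

    blocking_ratio is downlinks:uplinks per leaf switch, e.g. 1.0 for a
    non-blocking fabric or 2.0 for 2:1 blocking.
    """
    # Split each leaf's ports between node-facing downlinks and spine uplinks.
    downlinks = math.floor(ports_per_switch * blocking_ratio / (blocking_ratio + 1))
    uplinks = ports_per_switch - downlinks

    leaves = math.ceil(num_nodes / downlinks)
    # Each spine port terminates one leaf uplink.
    spines = math.ceil(leaves * uplinks / ports_per_switch)
    return leaves, spines

# Example: a 256-node cluster.
print(ndr_switch_estimate(256, blocking_ratio=1.0))  # non-blocking: (8, 4)
print(ndr_switch_estimate(256, blocking_ratio=2.0))  # 2:1 blocking: (7, 3)
```

For clusters small enough to fit within the node-facing ports of a single 64-port switch, one switch is sufficient and no spine tier is needed.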