System networks
Most HPC systems are configured with two networks: an administration network and a high-speed, low-latency data fabric. The administration network is typically Gigabit Ethernet connected to the onboard LOM/NDC of every server in the cluster, and is used for provisioning, management, and administration. On the compute servers, this network also carries BMC management traffic. For infrastructure and storage servers, the dedicated iDRAC Enterprise ports may be connected to this network for out-of-band (OOB) server management. The management network typically uses the Dell PowerSwitch N3248TE-ON Ethernet switch. If the system requires more than one management switch, multiple switches can be stacked with 10 Gigabit Ethernet cables.
A high-speed, low-latency fabric is recommended for clusters with more than four servers. The current recommendation is an HDR InfiniBand fabric, typically assembled using NVIDIA QM8790 40-port HDR InfiniBand switches. The number of switches required depends on the size of the cluster and the blocking ratio of the fabric.
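As a rough illustration of how cluster size and blocking ratio drive the switch count, the following sketch estimates leaf and spine switches for a two-tier fat-tree built from 40-port switches. This is a first-order estimate of our own, not part of the validated design; the function name and the simple port-split model are assumptions, and a real deployment must also consider cable reach, rail design, and topology validation.

```python
import math

def two_tier_fabric_size(nodes, ports_per_switch=40, blocking=1.0):
    """Estimate switch counts for a two-tier fat-tree (illustrative only).

    `blocking` is the oversubscription ratio: 1.0 means non-blocking,
    2.0 means 2:1, and so on.
    """
    # A single switch can serve the whole cluster directly.
    if nodes <= ports_per_switch:
        return {"leaves": 1, "spines": 0}

    # Split each leaf switch's ports between node downlinks and spine
    # uplinks according to the blocking ratio (downlinks : uplinks = b : 1).
    downlinks = math.floor(ports_per_switch * blocking / (blocking + 1))
    uplinks = ports_per_switch - downlinks

    leaves = math.ceil(nodes / downlinks)
    # Spines must terminate all leaf uplinks.
    spines = math.ceil(leaves * uplinks / ports_per_switch)
    return {"leaves": leaves, "spines": spines}
```

For example, a 64-node non-blocking fabric with 40-port switches needs four leaves (20 downlinks each) and two spines, while relaxing to a 2:1 blocking ratio reduces the leaf count to three.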