System networks
Most HPC clusters are configured with two networks: a management network and a high-speed, low-latency fabric. The management network connects to every system in the cluster and is used for provisioning, management, and monitoring; it is typically Gigabit Ethernet. This network can also provide iDRAC access through either the shared LOM or the dedicated iDRAC port, depending on the server. The Dell PowerSwitch N3248TE-ON Ethernet switch is the recommended starting point and can be stacked for larger clusters.
The high-speed, low-latency fabric is the communication backbone of the HPC cluster. Network traffic generated by workload applications, whether for inter-process communication or storage, flows over this fabric. An NVIDIA HDR InfiniBand fabric is recommended for this use case. For most systems, the fabric is built with NVIDIA QM8790 40-port HDR InfiniBand switches; the exact switch count depends on the blocking factor and the cluster size. Director-class switches are available for the largest systems.
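To illustrate how blocking factor and cluster size drive the switch count, the following is a rough sizing sketch for a two-tier (leaf/spine) fat tree built from 40-port switches such as the QM8790. The function name and sizing formula are illustrative assumptions, not a Dell or NVIDIA sizing tool; real designs must also account for cable reach, rail topology, and management ports.

```python
import math

SWITCH_PORTS = 40  # port count of an NVIDIA QM8790 HDR switch


def fat_tree_switch_count(nodes: int, blocking: int = 1) -> tuple[int, int]:
    """Estimate (leaf, spine) switch counts for a two-tier fat tree.

    `blocking` is the oversubscription ratio b in a b:1 design: the
    ratio of leaf downlinks (to nodes) to leaf uplinks (to spines).
    blocking=1 gives a non-blocking fabric.
    """
    # Split each leaf's 40 ports between downlinks and uplinks at b:1.
    down = SWITCH_PORTS * blocking // (blocking + 1)
    up = SWITCH_PORTS - down
    # Enough leaves to attach every node.
    leaves = math.ceil(nodes / down)
    # Enough spines to terminate every uplink leaving the leaf tier.
    spines = math.ceil(leaves * up / SWITCH_PORTS)
    return leaves, spines


# Example: a 400-node cluster, non-blocking (1:1) vs 2:1 oversubscribed.
print(fat_tree_switch_count(400, blocking=1))  # (20, 10)
print(fat_tree_switch_count(400, blocking=2))  # (16, 6)
```

In the non-blocking case each leaf splits its ports 20 down / 20 up, so 400 nodes need 20 leaves and 10 spines (30 switches total); relaxing to 2:1 oversubscription trades bandwidth for a smaller switch count, which is the tradeoff the blocking factor captures.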