System networks
Most HPC clusters are configured with two networks: a management network and a high-speed, low-latency network. The management network is connected to every server in the cluster and is used for provisioning, managing, and monitoring. This network is usually Gigabit Ethernet. It can also provide iDRAC access through the shared LOM or the dedicated iDRAC port, depending on the server. The Dell PowerSwitch N3248TE-ON Ethernet switch is the recommended starting point and can be stacked for larger clusters.
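The number of management switches follows directly from the number of managed ports. The sketch below is a minimal sizing estimate, not a prescription from this design: the function name, the default of 48 copper ports per N3248TE-ON, and the example counts are illustrative assumptions.

```python
import math

def mgmt_switch_count(servers, ports_per_server=1, ports_per_switch=48, extra=0):
    """Rough estimate of management (1 GbE) switch count.

    servers          -- number of servers on the management network
    ports_per_server -- 1 for shared LOM, 2 if a dedicated iDRAC port is also cabled
    ports_per_switch -- copper ports per switch (48 assumed for the N3248TE-ON)
    extra            -- additional ports for PDUs, fabric switches, and other devices
    """
    needed = servers * ports_per_server + extra
    return math.ceil(needed / ports_per_switch)

# Example: 200 servers with dedicated iDRAC cabling plus 16 other devices
print(mgmt_switch_count(200, ports_per_server=2, extra=16))  # -> 9
```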
The high-speed, low-latency fabric is the communication backbone of an HPC cluster. Network traffic generated by workload applications for inter-node communication or storage flows over this fabric. An NVIDIA NDR InfiniBand fabric is recommended for this use case. For most systems, the fabric is built with NVIDIA QM9790 64-port NDR InfiniBand switches. The exact switch count depends on the blocking factor and the cluster size.
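To make the relationship between blocking factor, cluster size, and switch count concrete, the following is a minimal sizing sketch for a two-tier fat tree built from 64-port switches. It is an illustrative estimate under simplifying assumptions (one HCA port per node, a single spine layer); the helper name, node count, and blocking factor in the example are not taken from this design.

```python
import math

def fat_tree_switch_count(nodes, radix=64, blocking=2):
    """Rough two-tier fat-tree sizing.

    nodes    -- number of InfiniBand endpoints (one HCA port per node assumed)
    radix    -- ports per switch (64 used here to mirror the QM9790)
    blocking -- blocking factor b in a b:1 design (1 = non-blocking)
    """
    # Split each leaf switch's ports between node links and uplinks so that
    # down-links : up-links is approximately the blocking factor.
    up_per_leaf = radix // (blocking + 1)
    down_per_leaf = radix - up_per_leaf

    leaves = math.ceil(nodes / down_per_leaf)
    # The spine layer must terminate every leaf uplink.
    spines = math.ceil(leaves * up_per_leaf / radix)
    return leaves, spines

# Example: 512 nodes at 2:1 blocking with 64-port switches
leaves, spines = fat_tree_switch_count(512, radix=64, blocking=2)
print(f"{leaves} leaf + {spines} spine switches")  # -> 12 leaf + 4 spine
```

Moving from 2:1 blocking to a non-blocking (1:1) design in this sketch shifts half of each leaf's ports to uplinks, which roughly doubles the leaf count for the same node count and increases the spine count accordingly.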