System networks
Most HPC clusters are configured with two types of networks: a management network and a high-speed, low-latency network. The management network is connected to every server in the cluster and is used for provisioning, job deployment, management, and monitoring. This network is usually 1 Gigabit Ethernet or faster. It can also provide iDRAC access through either the shared LOM or the dedicated iDRAC port, depending on the server. The Dell PowerSwitch N3248TE-ON Ethernet switch is the recommended starting point and can be stacked for larger clusters.
High-speed, low-latency fabrics are the communication backbone of HPC clusters. Network traffic generated by workload applications, whether for inter-node communication or storage, flows over this fabric. An NDR NVIDIA InfiniBand fabric is recommended for this use case. For most systems, the fabric is built with NVIDIA QM9790 64-port NDR InfiniBand switches; the exact switch count depends on the blocking factor and cluster size. Director-class switches are available for the largest systems.
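The relationship between blocking factor, cluster size, and switch count can be illustrated with a small sizing sketch. This assumes a generic two-tier fat-tree model with 64-port switches; the function name, defaults, and model are illustrative assumptions, not a Dell or NVIDIA sizing tool.

```python
import math

def fat_tree_switch_count(nodes: int, ports: int = 64, blocking: int = 1) -> dict:
    """Estimate leaf and spine switch counts for a two-tier fat tree.

    Illustrative sketch only: each leaf switch splits its ports between
    node-facing downlinks and spine-facing uplinks according to the
    blocking factor `blocking`:1 (1:1 means non-blocking).
    """
    down_per_leaf = ports * blocking // (blocking + 1)  # ports facing nodes
    up_per_leaf = ports - down_per_leaf                 # ports facing spines
    leaves = math.ceil(nodes / down_per_leaf)
    spines = math.ceil(leaves * up_per_leaf / ports)
    return {"leaves": leaves, "spines": spines, "total": leaves + spines}

# Example: 1024 nodes, non-blocking, 64-port QM9790-class switches
print(fat_tree_switch_count(1024))  # {'leaves': 32, 'spines': 16, 'total': 48}
```

Relaxing the blocking factor (for example, 2:1) trades bisection bandwidth for fewer uplinks and spine switches, which is why the exact switch count cannot be stated without knowing both cluster size and the acceptable oversubscription.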