System networks
Most HPC clusters are configured with two networks: a management network and a high-speed, low-latency network. The management network connects to every server in the cluster and is used for provisioning, management, and monitoring. This network is usually Gigabit Ethernet. It can also provide iDRAC access through either the shared LOM or the dedicated iDRAC port, depending on the server. The Dell PowerSwitch N3248TE-ON Ethernet switch is the recommended starting point and can be stacked for larger clusters.
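As a rough illustration of how the management network scales, the sketch below estimates how many 48-port switches a cluster of a given size needs. The one-cable-per-node assumption, the optional second cable for a dedicated iDRAC port, and the function name are illustrative assumptions, not a Dell sizing rule; the 48-port figure matches the N3248TE-ON access ports, and stacking or uplink ports are not counted.

```python
import math

def mgmt_switch_count(nodes, dedicated_idrac=False, ports_per_switch=48):
    """Rough count of 48-port management switches for a cluster.

    Assumes one 1GbE management connection per node, plus a second
    connection per node when the dedicated iDRAC port is used instead
    of the shared LOM. Uplink and stacking ports are ignored.
    """
    cables_per_node = 2 if dedicated_idrac else 1
    return math.ceil(nodes * cables_per_node / ports_per_switch)

if __name__ == "__main__":
    for nodes in (32, 128, 512):
        print(f"{nodes:4d} nodes: "
              f"{mgmt_switch_count(nodes)} switch(es) with shared LOM, "
              f"{mgmt_switch_count(nodes, dedicated_idrac=True)} with dedicated iDRAC")
```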
The high-speed, low-latency fabric is the communication backbone of the HPC cluster. Network traffic generated by workload applications for inter-node communication or storage flows over this fabric. A Cornelis Omni-Path Express fabric is recommended here. For most systems, the fabric is built with 48-port Cornelis Omni-Path Express edge switches; the exact switch count depends on the blocking factor and cluster size. Director-class switches are available for the largest systems.
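To make the relationship between blocking factor, cluster size, and switch count concrete, the sketch below sizes a simple two-tier fat-tree built from 48-port edge switches. The two-tier layout, the port split between downlinks and uplinks, and the function name are assumptions made for illustration only, not a Cornelis sizing guideline.

```python
import math

def fabric_switch_estimate(nodes, ports_per_switch=48, blocking=2):
    """Estimate edge and spine switch counts for a two-tier fat-tree.

    `blocking` is the oversubscription ratio expressed as N in N:1
    (1 = non-blocking). Returns (edge_switches, spine_switches).
    """
    if nodes <= ports_per_switch:
        return 1, 0  # a single edge switch connects every node

    # Split each edge switch's ports between node downlinks and spine
    # uplinks according to the blocking factor (e.g. 32 down / 16 up at 2:1).
    downlinks = (ports_per_switch * blocking) // (blocking + 1)
    uplinks = ports_per_switch - downlinks

    edges = math.ceil(nodes / downlinks)
    # Every edge switch needs `uplinks` spine ports; each spine supplies 48.
    spines = math.ceil(edges * uplinks / ports_per_switch)
    return edges, spines

if __name__ == "__main__":
    for nodes in (32, 128, 512):
        for blocking in (1, 2):
            edges, spines = fabric_switch_estimate(nodes, blocking=blocking)
            print(f"{nodes:4d} nodes, {blocking}:1 blocking -> "
                  f"{edges} edge + {spines} spine switches")
```

At 2:1 blocking each edge switch serves more nodes but offers less uplink bandwidth, which is why the same node count can yield noticeably different switch counts depending on the chosen oversubscription.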