System networks
Most HPC systems are configured with two networks: an administration network and a high-speed, low-latency switched fabric. The administration network is typically Gigabit Ethernet and connects to the onboard LAN on Motherboard (LOM) or Network Daughter Card (NDC) port of every server in the cluster. This network is used for provisioning, management, and administration. On the compute servers, this network also carries Baseboard Management Controller (BMC) management traffic. For infrastructure and storage servers, the iDRAC Enterprise ports may be connected to this network for out-of-band (OOB) server management. The management network typically uses the Dell Networking S3048-ON Ethernet switch; if the system requires more than one switch, multiple switches can be stacked with 10 Gigabit Ethernet cables.
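To make the single-administration-network layout concrete, the following sketch generates a simple address plan in which both the in-band OS/provisioning interfaces and the out-of-band iDRAC/BMC interfaces share one flat subnet. The subnet, hostname pattern, and address offsets are illustrative assumptions, not values prescribed by this design.

```python
import ipaddress

# Hypothetical administration-network plan: one flat subnet carries both the
# in-band OS/provisioning interfaces and the out-of-band iDRAC/BMC interfaces.
ADMIN_SUBNET = ipaddress.ip_network("10.0.0.0/22")
BMC_OFFSET = 512  # place BMC/iDRAC addresses in the upper half of the subnet


def admin_plan(num_compute_nodes):
    """Return a per-node mapping of OS and BMC addresses on the admin network."""
    hosts = list(ADMIN_SUBNET.hosts())
    plan = {}
    for i in range(num_compute_nodes):
        name = f"node{i + 1:03d}"
        plan[name] = {
            "os": str(hosts[i + 1]),                # provisioning / management
            "bmc": str(hosts[i + 1 + BMC_OFFSET]),  # out-of-band management
        }
    return plan


# Example: print the plan for a small four-node cluster.
for host, addrs in admin_plan(4).items():
    print(host, addrs)
```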
A high-speed, low-latency fabric is recommended for clusters with more than four servers. The current recommendation is a High Data Rate (HDR) InfiniBand fabric, typically built from NVIDIA QM8790 40-port HDR InfiniBand switches. The number of switches required depends on the size of the cluster and the blocking ratio of the fabric.
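As a rough illustration of how the blocking ratio drives the switch count, the sketch below estimates leaf and spine counts for a two-tier fat-tree built from 40-port switches. The function name, the simple port-splitting rule, and the example node count are assumptions for illustration only, not Dell sizing guidance.

```python
import math


def hdr_fabric_estimate(num_nodes, switch_ports=40, blocking_ratio=1.0):
    """Rough two-tier fat-tree switch count for an HDR InfiniBand fabric.

    blocking_ratio is downlinks:uplinks per leaf switch, e.g. 1.0 for a
    non-blocking fabric or 2.0 for 2:1 blocking. Simplified estimate only.
    """
    # Split each leaf switch's ports between node downlinks and spine uplinks.
    uplinks = max(1, math.floor(switch_ports / (blocking_ratio + 1)))
    downlinks = switch_ports - uplinks

    # Small clusters fit in a single switch with no spine tier.
    if num_nodes <= switch_ports:
        return {"leaf": 1, "spine": 0, "total": 1}

    leaves = math.ceil(num_nodes / downlinks)
    # Each leaf contributes 'uplinks' cables toward the spine tier.
    spines = math.ceil(leaves * uplinks / switch_ports)
    return {"leaf": leaves, "spine": spines, "total": leaves + spines}


# Example: 128 nodes at a 2:1 blocking ratio.
print(hdr_fabric_estimate(128, blocking_ratio=2.0))
```

For 128 nodes at 2:1 blocking, the estimate works out to five leaf switches and two spine switches; tightening the ratio to non-blocking roughly doubles the uplink count per leaf and therefore the spine tier.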