System networks
Most TC clusters are equipped with two networks: a management network and a high-speed, low-latency network. The management network, typically Gigabit Ethernet, is connected to every server in the cluster and is used for provisioning, managing, and monitoring. This network can also carry iDRAC traffic through either the Shared LOM or the dedicated iDRAC port, depending on the server configuration. The Dell PowerSwitch N3248TE-ON Ethernet switch is recommended as the starting point for this network and can be stacked to accommodate larger clusters.
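As a minimal sketch of the monitoring role this network plays, the Python snippet below checks whether each compute node and its iDRAC endpoint respond to a ping on the management network. The node names and the "-idrac" hostname suffix are illustrative assumptions, not part of the validated design; adjust them to match the cluster's actual naming scheme.

```python
import subprocess

# Hypothetical naming scheme: each compute node exposes an iDRAC
# endpoint on the management network as "<node>-idrac".
NODES = [f"node{i:02d}" for i in range(1, 5)]

def reachable(host: str) -> bool:
    """Send a single ICMP echo request and report success."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

for node in NODES:
    for endpoint in (node, f"{node}-idrac"):
        status = "up" if reachable(endpoint) else "unreachable"
        print(f"{endpoint}: {status}")
```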
The high-speed, low-latency fabric serves as the communication backbone of a TC Core cluster, carrying the application communication and storage traffic generated by cluster workloads. An NDR NVIDIA InfiniBand fabric is recommended for this purpose. Most systems build the fabric from NVIDIA QM9790 64-port NDR InfiniBand switches; the exact number of switches required depends on the blocking factor and the size of the cluster.
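To make the relationship between blocking factor and switch count concrete, the sketch below estimates leaf and spine counts for an idealized two-tier fat tree built from 64-port switches. The b:1 port split and the ceiling arithmetic are simplifying assumptions for illustration; a real fabric design would also account for cabling constraints and future growth.

```python
import math

def ndr_switch_count(nodes: int, blocking: int = 1, ports: int = 64) -> dict:
    """Estimate leaf and spine switch counts for a two-tier fat tree.

    blocking is the oversubscription ratio b in a b:1 design:
    b downlinks (node ports) for every uplink on each leaf switch.
    """
    # Split each leaf's ports between downlinks and uplinks at b:1.
    down = (ports * blocking) // (blocking + 1)
    up = ports - down
    leaves = math.ceil(nodes / down)
    # Every leaf uplink must land on a spine port.
    spines = math.ceil(leaves * up / ports)
    return {"leaf": leaves, "spine": spines, "total": leaves + spines}

# Example: 256 nodes, non-blocking (1:1) vs. 2:1 oversubscribed.
print(ndr_switch_count(256))              # {'leaf': 8, 'spine': 4, 'total': 12}
print(ndr_switch_count(256, blocking=2))  # {'leaf': 7, 'spine': 3, 'total': 10}
```

Under these assumptions, a 256-node non-blocking fabric needs 12 switches while a 2:1 oversubscribed design needs 10: allowing oversubscription trades bisection bandwidth for fewer switches, which is why the blocking factor drives the switch count.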