Hardware configuration
The hardware comprises a cluster with a head node, compute nodes, Isilon storage, and networks. The head node's roles can include deploying the cluster of compute nodes, managing the compute nodes, handling user logins and access, providing a compilation environment, and submitting jobs to the compute nodes. The compute nodes are the workhorses that execute the submitted jobs. Bright Cluster Manager, software from Bright Computing, is used to deploy and manage the whole cluster.
Figure 4 shows a high-level overview of the cluster, which includes one PowerEdge 740 head node, n PowerEdge C4140 compute nodes (each with four NVIDIA Tesla V100 GPUs), one chassis of Isilon F800 storage, and two networks. All compute nodes are interconnected through an InfiniBand switch. The compute nodes and the head node are also connected to a 1 Gigabit Ethernet management switch, which Bright Cluster Manager uses to administer the cluster. The Isilon storage is dual-connected to the FDR-40GigE gateway switch so that it can be accessed by the head node and all compute nodes.
Horovod, a distributed training framework for TensorFlow, was used to scale the training across multiple compute nodes.
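As an illustration only, a Horovod job on this kind of cluster is typically launched with one worker process per GPU. The sketch below assumes two C4140 compute nodes with four V100 GPUs each; the host names (node01, node02), the training script name, and the batch-size flag are hypothetical placeholders, not taken from this document:

```shell
# Hypothetical launch fragment: 2 compute nodes x 4 GPUs = 8 worker processes.
# node01/node02 and train.py are placeholders for illustration.
horovodrun -np 8 -H node01:4,node02:4 \
    python train.py --batch-size 64
```

The `-np` flag gives the total number of worker processes and `-H` lists each host with its process (GPU) slot count; `horovodrun` then delegates the actual process launch to an underlying MPI or Gloo runtime.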
Refer to Appendix B: Benchmark setup and execution for further configuration details.