The network architecture employs a Virtual Link Trunking (VLT) connection between the two ToR switches. In a VLT environment, all paths are active, increasing available throughput while still protecting against hardware failures. A non-VLT environment achieves redundancy through standby equipment, which drives up infrastructure costs and increases complexity.
VLT technology allows a server or bridge to uplink a physical trunk into more than one Dell EMC PowerSwitch S5248F-ON switch by treating the uplink as one logical trunk. A VLT-connected pair of switches acts as a single switch to a connecting bridge or server, and both links from the bridge network can actively forward and receive traffic. VLT replaces Spanning Tree Protocol (STP)-based networks by delivering both redundancy and full bandwidth utilization across multiple active paths. VLT technology provides:
The Dell EMC PowerSwitch S5248F-ON switches each provide six 100 GbE uplink ports. The following figure shows the VLT interconnect (VLTi) configuration in this architecture:
Figure 6. PowerSwitch S5248F-ON VLTi configuration
The configuration uses two 100 GbE ports from each ToR switch to provide a 200 Gbps data path between the switches. The remaining four 100 GbE ports allow for high-speed connectivity to spine switches or directly to the data center core network infrastructure. They can also be used to extend connectivity to other racks.
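For illustration, a VLTi setup of this kind might be configured on each ToR switch along the following lines. This is a minimal sketch assuming Dell SmartFabric OS10; the interface numbers, domain ID, and backup destination address are hypothetical placeholders, not values from this design:

```
! Hypothetical OS10 VLT configuration on the first ToR switch.
! Interface numbers, domain ID, and backup IP are placeholders.
OS10(config)# interface range ethernet 1/1/53-1/1/54
OS10(conf-range-eth1/1/53-1/1/54)# no shutdown
OS10(conf-range-eth1/1/53-1/1/54)# exit
OS10(config)# vlt-domain 1
OS10(conf-vlt-1)# discovery-interface ethernet 1/1/53-1/1/54
OS10(conf-vlt-1)# backup destination 100.67.0.2
```

The two interfaces named under `discovery-interface` form the VLTi, and the peer switch is configured symmetrically with its own backup destination address.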
You can scale out the Ready Stack by adding multiple compute nodes (pods) in the data center. You can use the Dell EMC PowerSwitch Z9264F-ON switch to create a simple yet scalable network, as shown in the following figure:
Figure 7. Multiple compute pods scaled out using leaf-spine architecture
The Z9264F-ON switches serve as the spine switches in the leaf-spine architecture. The Z9264F-ON is a multi-rate switch that supports 10/25/40/50/100 GbE connectivity and can aggregate multiple racks with little or no oversubscription.
When connecting multiple racks, you can use the 40/100 GbE uplinks from each rack to build a large fabric that supports multi-terabit clusters. The density of the Z9264F-ON enables flattening the network tiers and creating an equal-cost fabric from any point to any other point in the network.
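The oversubscription ratio mentioned above is the aggregate server-facing (downlink) bandwidth of a leaf switch divided by its spine-facing (uplink) bandwidth. A short sketch of that arithmetic, using hypothetical port counts and speeds for illustration:

```python
# Leaf-spine oversubscription ratio: total downlink (server-facing)
# bandwidth divided by total uplink (spine-facing) bandwidth.
# Port counts and speeds below are hypothetical examples.

def oversubscription_ratio(downlink_ports: int, downlink_gbps: float,
                           uplink_ports: int, uplink_gbps: float) -> float:
    """Return the leaf switch oversubscription ratio (downlink:uplink)."""
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

# Example: 48 x 25 GbE server ports with 4 x 100 GbE uplinks -> 3:1
print(oversubscription_ratio(48, 25, 4, 100))  # 3.0
# Example: 48 x 25 GbE server ports with all 6 x 100 GbE uplinks -> 2:1
print(oversubscription_ratio(48, 25, 6, 100))  # 2.0
```

A ratio of 1.0 corresponds to a non-blocking fabric; higher values indicate contention on the uplinks when all servers transmit at line rate.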
For large-domain layer-2 requirements, you can use extended multi-domain VLT on the Z9264F-ON, as shown in the following figure. The resulting VLT pair can scale to hundreds of servers across multiple racks. Each rack has six 100 GbE links to the core network (two of which are used for VLT), providing enough bandwidth for all the traffic between racks.
Figure 8. Multiple compute pods scaled out using multi-domain VLT
The Isilon arrays are configured for connection to two different networks: the external or client-facing network and the internal or back-end network, as shown in Figure 1.
The external or client-facing network uses 10 GbE fiber to connect the Isilon nodes to the PowerSwitch S5248F-ON switches on the Isilon cluster's external network. The internal or back-end network connects the Isilon nodes to each other so that they can communicate within the cluster. With Isilon OneFS 9.0.x, back-end communication between Isilon cluster nodes includes the option to use a pair of redundant 10 GbE or 40 GbE switches. The Isilon H500 uses 40 GbE fiber connections to the Z9100-ON switches. With the Z9100-ON switches, you can add Isilon nodes to support rapidly growing unstructured data needs in a true scale-out NAS solution.
The compute cluster consists of Dell EMC PowerEdge rack servers. The compute and management rack servers each have two 10/25 GbE connections to the S5248F-ON switches through a single Mellanox ConnectX-5 LX dual-port 10/25 GbE network adapter.
Using vSphere Distributed Switch (VDS), customers can prioritize bandwidth for different traffic classes such as host management, vSphere vMotion, NFS, and the VM network. VDS can be configured, managed, and monitored from within vCenter and provides:
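With share-based prioritization of the kind VDS Network I/O Control uses, each traffic class receives link bandwidth in proportion to its configured shares when the link is contended. A sketch of that proportional arithmetic, with hypothetical share values that are not taken from this design:

```python
# Proportional share allocation, as used for VDS traffic prioritization:
# under contention, each traffic class receives link bandwidth in
# proportion to its shares. Share values here are hypothetical examples.

def allocate_bandwidth(link_gbps: float,
                       shares: dict[str, int]) -> dict[str, float]:
    """Split link bandwidth across traffic classes by relative shares."""
    total = sum(shares.values())
    return {name: link_gbps * s / total for name, s in shares.items()}

# Hypothetical shares on a 25 GbE uplink.
shares = {"management": 20, "vmotion": 50, "nfs": 80, "vm_network": 100}
for name, gbps in allocate_bandwidth(25.0, shares).items():
    print(f"{name}: {gbps:.1f} Gbps")  # e.g. vm_network: 10.0 Gbps
```

Because shares are relative rather than absolute, an idle class's bandwidth is redistributed to the active classes; the values above describe only the worst-case (fully contended) split.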
The following figure shows the VDS configuration for the management and compute servers:
Figure 9. VDS configuration