A VxRail cluster depends on adjacent Ethernet switches, commonly referred to as ‘top-of-rack’ switches, to support cluster operations. VxRail is broadly compatible with most Ethernet switches on the market. For best results, select a switch platform that meets the operational and performance criteria for your planned use cases.
The VxRail product does not have a backplane, so the adjacent ‘top-of-rack’ switch provides all connectivity between the nodes that make up a VxRail cluster. All of the networks configured within the VxRail cluster (management, storage, virtual machine movement, and guest networks) depend on the ‘top-of-rack’ switches for physical network transport between the nodes, and upstream to data center services and end users.
The network traffic configured in a VxRail cluster is Layer 2. VxRail is architected to operate efficiently with the physical ‘top-of-rack’ switches by assigning a virtual LAN (VLAN) to each individual VxRail Layer 2 network in the cluster. This segmentation eases network administration and integration with the upstream network.
One specific network, known as the ‘VxRail internal management network’, depends on multicast services on the ‘top-of-rack’ switches for node discovery and cluster deployment. Through the VLAN assignment, the flooding of Layer 2 multicast traffic is limited to the interfaces that belong to that VLAN, except for the interface that is the source of the multicast traffic.
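For illustration, the sketch below shows one way the VxRail VLANs might be defined on a ‘top-of-rack’ switch. Cisco IOS-style syntax is used for readability only; exact commands vary by switch vendor, and the VLAN IDs and names are hypothetical examples (3939 is a commonly used default ID for the VxRail internal management network, but confirm the value for your deployment).

    ! Illustrative VLAN layout for a VxRail cluster
    ! (IDs and names are examples, not prescribed values)
    vlan 100
     name vxrail-external-mgmt
    vlan 101
     name vxrail-vmotion
    vlan 102
     name vxrail-vsan
    vlan 200
     name vxrail-guest
    ! Internal management VLAN: node discovery multicast stays within it
    vlan 3939
     name vxrail-internal-mgmt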
A common Ethernet switch feature, Multicast Listener Discovery (MLD) snooping and querier, is designed to constrain the flooding of multicast traffic by examining MLD messages and then forwarding multicast traffic only to interested interfaces. Because traffic on the node discovery network is already constrained by the VLAN configured on the ports supporting the VxRail cluster, enabling this feature may provide an incremental efficiency benefit, and it does not negatively impact network efficiency.
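Where MLD snooping is enabled, the switch-side change is typically a single global setting. A minimal sketch in the same Cisco IOS-style syntax follows; command names and per-VLAN defaults vary by platform, so treat this as an assumption to verify against your switch documentation.

    ! Enable MLD snooping globally; on many platforms this then
    ! applies to each VLAN, including the internal management VLAN
    ipv6 mld snooping
    ! On a VLAN with no multicast router, an MLD querier may also be
    ! required; querier configuration is platform-specific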
The switch does not need to support Layer 3 services or be licensed for Layer 3 services.
A VxRail cluster can be deployed in a ‘flat’ network using the default VLAN on the switch, or configured so that the management, storage, and guest networks are segmented by VLANs for efficient operations. For best results, especially in a production environment, deploy only managed switches. A VxRail cluster built on a ‘flat’ network should be considered only for test cases or temporary usage.
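In a VLAN-segmented deployment, each node-facing switch port typically carries the VxRail VLANs as a tagged trunk. The sketch below continues the hypothetical VLAN IDs from the earlier example, again in Cisco IOS-style syntax; port naming, trunk commands, and spanning-tree edge-port settings differ by vendor.

    ! Node-facing port carrying the VxRail VLANs as a tagged trunk
    interface TenGigabitEthernet1/0/1
     description vxrail-node-01-nic-1
     switchport mode trunk
     switchport trunk allowed vlan 100-102,200,3939
     ! Treat the port as an edge port so a node reboot does not
     ! trigger spanning-tree reconvergence (command varies by platform)
     spanning-tree portfast trunk

In a ‘flat’ deployment, the same ports would instead be untagged access ports in the switch’s default VLAN.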
In certain instances, additional switch features and functionality are necessary to support specific use cases or requirements.
If your plans include deploying all-flash storage on your VxRail cluster, 10 GbE network switches are the minimum requirement for this feature. Dell Technologies recommends a 25 GbE network if your data center infrastructure supports it.
Enabling advanced features on the switches planned for the VxRail cluster, such as Layer 3 routing services, can cause resource contention and consume switch buffer space. To avoid contention, select switches with sufficient resources and buffer capacity.
Switches that support higher port speeds are designed with larger Network Processor Unit (NPU) buffers. An NPU shared switch buffer of at least 16 MB is recommended for 10 GbE network connectivity, and an NPU buffer of at least 32 MB is recommended for more demanding 25 GbE network connectivity.
For very large VxRail clusters with demanding performance requirements and advanced switch services enabled, consider switches with additional resource capacity and deeper buffers.