A VxRail cluster depends on adjacent Ethernet switches, commonly referred to as ‘top-of-rack’ switches, to support cluster operations. VxRail is broadly compatible with most Ethernet switches on the market. For best results, select a switch platform that meets the operational and performance criteria for your planned use cases.
The VxRail product does not have a backplane, so the adjacent ‘top-of-rack’ switch enables all connectivity between the nodes that comprise a VxRail cluster. All the networks (management, storage, virtual machine movement, guest networks) configured within the VxRail cluster depend on the ‘top-of-rack’ switches for physical network transport between the nodes, and upstream to data center services and end-users.
The network traffic configured in a VxRail cluster is Layer 2. VxRail is architected to work efficiently with the physical ‘top-of-rack’ switches through the assignment of virtual LANs (VLANs) to the individual VxRail Layer 2 networks in the cluster. This eases network administration and integration with the upstream network.
One specific network, known as the ‘VxRail internal management network’, depends on multicasting services on the ‘top-of-rack’ switches for node discovery and cluster deployment purposes. Through the VLAN assignment, the flooding of Layer 2 multicast traffic is limited to the interfaces that belong to that VLAN, except for the interface that is the source of the multicast traffic.
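As an illustration of this approach, the sketch below defines such a VLAN on a ‘top-of-rack’ switch and permits it only on a node-facing port. The syntax is Cisco NX-OS style, and the VLAN ID (3939) and interface number are examples only; adjust both to match your environment and your switch vendor's command set.

    ! Define a VLAN for the VxRail internal management network (example ID)
    vlan 3939
      name VxRail-Internal-Mgmt

    ! Permit the VLAN only on ports that connect to VxRail nodes, so that
    ! multicast flooding for node discovery stays on those interfaces
    interface Ethernet1/1
      description VxRail-Node-1
      switchport mode trunk
      switchport trunk allowed vlan 3939

Because the VLAN is permitted only on the node-facing ports, the multicast traffic used for node discovery is confined to those interfaces.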
A common Ethernet switch feature, Multicast Listener Discovery (MLD) snooping and querier, is designed to constrain the flooding of multicast traffic by examining MLD messages and then forwarding multicast traffic only to interested interfaces. Because the traffic on this node discovery network is already constrained by the VLAN configured on the ports supporting the VxRail cluster, enabling this feature is optional: it may provide some incremental efficiency benefit, and it does not negatively impact network operations.
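If you choose to enable MLD snooping and querier, the sketch below shows roughly what that can look like. This is an illustration only, in Cisco NX-OS style syntax: the global and per-VLAN command forms, the example VLAN ID, and the querier address shown are assumptions that vary by switch vendor and software release, so confirm the exact commands in your switch documentation.

    ! Enable MLD snooping globally (command form is an assumption; verify per vendor)
    ipv6 mld snooping

    ! Enable a querier on the VxRail internal management VLAN (example VLAN and address)
    vlan configuration 3939
      ipv6 mld snooping querier fe80::1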
If your data center networking policy restricts the use of multicast, IP addresses can be manually assigned to the VxRail nodes as an alternative to automatic node discovery.
The switch does not need to support Layer 3 services or be licensed for Layer 3 services.
A VxRail cluster can be deployed in a ‘flat’ network using the default VLAN on the switch, or be configured so that all the management, storage, and guest networks are segmented by virtual LANs for efficient operations. For best results, especially in a production environment, only managed switches should be deployed. A VxRail cluster that is built on a ‘flat’ network should be considered only for test cases or for temporary usage.
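Expanding on the earlier sketch, a VLAN-segmented deployment typically configures each node-facing port as a trunk that carries the VxRail networks as tagged VLANs. The example below again uses Cisco NX-OS style syntax; the VLAN IDs (external management 100, vMotion 101, vSAN 102, virtual machine networks 103, internal management 3939) and the choice to carry external management untagged on the native VLAN are assumptions for illustration only.

    interface Ethernet1/1
      description VxRail-Node-1
      switchport mode trunk
      switchport trunk native vlan 100
      switchport trunk allowed vlan 100-103,3939
      spanning-tree port type edge trunk
      no shutdown

In a ‘flat’ deployment, the same port would instead remain in the default VLAN with no trunking configuration.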
In certain instances, additional switch features and functionality are necessary to support specific use cases or requirements.
Decide if you plan to use one or two switches for the VxRail cluster. One switch is acceptable, and is often used in test and development environments. To support sustained performance, high availability, and failover in production environments, two or more switches are required.
VxRail is a software-defined data center solution that depends on the physical top-of-rack switching for network communications, and it is engineered to enable full redundancy and failure protection across the cluster. For customer environments that require protection from a single point of failure, the adjacent network supporting the VxRail cluster must also be designed and configured to eliminate any single point of failure. A minimum of two switches should be deployed to support high availability and to balance the workload on the VxRail cluster, with the switches linked by a pair of cables to support the flow of Layer 2 traffic between them.
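A minimal sketch of that inter-switch link, again in Cisco NX-OS style syntax with example interface and VLAN numbers, bundles the two cables into one logical port channel that trunks the VxRail VLANs between the switches. If node connections are later aggregated across both switches, a multi-chassis link aggregation technology (for example Cisco vPC or Dell VLT) is also required; that configuration is vendor specific and not shown here.

    ! Bundle the two inter-switch cables into a single logical link
    interface port-channel 10
      description Inter-switch-link
      switchport mode trunk
      switchport trunk allowed vlan 100-103,3939

    interface Ethernet1/53
      channel-group 10 mode active
      no shutdown

    interface Ethernet1/54
      channel-group 10 mode active
      no shutdown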
Consideration should also be given to link aggregation to enable load balancing and failure protection at the port level. NIC teaming, the pairing of a set of physical ports into a logical port for this purpose, is supported in VxRail versions 7.0.130 and later. These logical port pairings can peer with a pair of ports on the adjacent switches to enable load balancing of demanding VxRail networks.
Figure 4. Multi-chassis link aggregation across two switches
Support for the Link Aggregation Control Protocol (LACP) at the cluster level was also introduced in VxRail version 7.0.130. The switches supporting the VxRail cluster should support LACP for better manageability and a broader set of load-balancing options.
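The switch side of such a peering can be sketched as an LACP port channel toward a single node, as shown below in Cisco NX-OS style syntax. The port, port-channel, and VLAN numbers are examples only, the matching teaming and LACP policy must also be configured on the VxRail side, and if the two member ports land on different switches a multi-chassis link aggregation technology (vPC or VLT) is required, as noted above.

    ! LACP port channel toward one VxRail node (example: vMotion and vSAN VLANs)
    interface port-channel 21
      description VxRail-Node-1-LAG
      switchport mode trunk
      switchport trunk allowed vlan 101-102

    interface Ethernet1/5
      channel-group 21 mode active
      no shutdown

    interface Ethernet1/6
      channel-group 21 mode active
      no shutdown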