A VxRail cluster depends on adjacent Ethernet switches, commonly referred to as “top-of-rack” (ToR) switches, to support cluster operations. VxRail is broadly compatible with most Ethernet switches on the market. For best results, select a switch platform that meets the operational and performance criteria for your planned use cases.
The VxRail product does not have a backplane, so the adjacent ToR switch enables all connectivity between the nodes that comprise a VxRail cluster. All the networks (management, storage, virtual machine movement, guest networks) configured within the VxRail cluster depend on the ToR switches for physical network transport between the nodes, and connectivity upstream to data center services and end-users.
The network traffic configured in a VxRail cluster is Layer 2. VxRail is architected to work efficiently with the physical ToR switches by assigning a virtual LAN (VLAN) to each VxRail Layer 2 network in the cluster. This functionality eases network administration and integration with the upstream network.
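As a planning aid, the following is a minimal sketch of one way to record the VLAN assignments for the individual VxRail networks and verify that each VLAN ID is valid and used only once. The network names and VLAN IDs are hypothetical example values, not VxRail defaults.

```python
# Hypothetical VLAN plan for the VxRail Layer 2 networks; the network names
# and VLAN IDs are example values only, not VxRail defaults.
vlan_plan = {
    "external_management": 100,
    "internal_management": 101,
    "vmotion": 102,
    "vsan": 103,
    "guest_network_a": 200,
}

def validate_vlan_plan(plan):
    """Check that each VLAN ID is in the valid range and assigned to only one network."""
    seen = {}
    for network, vlan_id in plan.items():
        if not 1 <= vlan_id <= 4094:
            raise ValueError(f"{network}: VLAN {vlan_id} is outside the valid range 1-4094")
        if vlan_id in seen:
            raise ValueError(f"VLAN {vlan_id} is assigned to both {seen[vlan_id]} and {network}")
        seen[vlan_id] = network

validate_vlan_plan(vlan_plan)
print("VLAN plan is consistent")
```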
The VxRail product has two separate and distinct management networks. One management network extends externally to connect to IT administration and external data center services. The second management network is isolated, visible only to the VxRail nodes.
Figure 4. VxRail management networks
The network that is visible only to the VxRail nodes depends on IPv6 multicast services configured on the adjacent ToR switches for node discovery purposes. One node is automatically designated as the primary node; it acts as the source and listens for multicast packets from the other nodes. A VLAN assignment on this network limits the multicast traffic to only the interfaces connected to this internal management network.
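To illustrate the general mechanism, the following is a minimal sketch of an IPv6 multicast listener in Python. It is conceptual only: the multicast group address, port, and message handling are assumptions for illustration, not the actual VxRail node discovery protocol, which is internal to the product.

```python
import socket
import struct

# Conceptual illustration only: the group address, port, and message handling
# are assumptions, not the actual VxRail node discovery protocol.
GROUP = "ff02::1234"   # example link-local scope multicast group
PORT = 50000           # example UDP port

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("::", PORT))

# Join the multicast group on the default interface (index 0); the switch
# floods this traffic only within the VLAN assigned to the discovery network.
mreq = socket.inet_pton(socket.AF_INET6, GROUP) + struct.pack("@I", 0)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)

while True:
    data, addr = sock.recvfrom(1500)
    print(f"Discovery announcement from {addr[0]}: {data!r}")
```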
Multicast Listener Discovery (MLD) snooping and querier, a common Ethernet switch feature, further constrains the flooding of multicast traffic by examining MLD messages and then forwarding multicast traffic only to interested interfaces. Because the traffic on this node discovery network is already constrained by the VLAN configured on the ports supporting the VxRail cluster, enabling this feature may provide an incremental efficiency benefit, and it does not negatively impact network efficiency.
If your data center networking policy restricts the IPv6 multicast protocol, IP addresses can be assigned manually to the VxRail nodes as an alternative to automatic discovery.
The Ethernet switch does not need to support or be licensed for Layer 3 services. You can enable routing services at the ToR switch or further upstream in the network infrastructure.
A VxRail cluster can be deployed in a “flat” network using the default VLAN on the switch. It can also be configured so that all the management, storage, and guest networks are segmented by virtual LANs for efficient operations. For best results, especially in a production environment, only managed switches should be deployed, and VLANs should be used. A VxRail cluster that is built on a “flat” network should be considered only for test cases or for temporary usage.
In certain instances, additional switch features and functionality are necessary to support specific use cases or requirements.
Depending on the interoperability requirements for different types of storage resources, there may be additional switch features to consider when selecting Ethernet switches for your VxRail cluster.
Decide whether you plan to use one or two switches for the VxRail cluster. One switch is acceptable and is often used in test and development environments. To support sustained performance, high availability, and failover in production environments, two or more switches are required.
VxRail is a software-defined data center solution that depends on the physical top-of-rack switches for network communication and is engineered to enable full redundancy and failure protection across the cluster. For customer environments that require protection from a single point of failure, the adjacent network supporting the VxRail cluster must also be designed and configured to eliminate any single point of failure. A minimum of two switches should be deployed to support high availability and to balance the workload on the VxRail cluster. The switches should be linked with a pair of cables to support the flow of Layer 2 traffic between them.
Also consider link aggregation to enable load balancing and failure protection at the port level. NIC teaming, the pairing of a set of physical ports into a logical port for this purpose, is supported in VxRail versions 7.0.130 and later. These logical port pairings can peer with a pair of ports on the adjacent switches to enable load balancing of demanding VxRail networks.
Figure 5. Multichassis link aggregation across two switches
Support for Link Aggregation Control Protocol (LACP) at the cluster level is also introduced in VxRail version 7.0.130. The switches supporting the VxRail cluster should support LACP for better manageability and broader load balancing options.
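To illustrate how a link aggregation group spreads traffic across its member links, the following sketch models the hash-based member selection that many switches commonly use: fields from each flow are hashed and the result selects one physical link, so a given flow stays on one link while different flows are distributed across the group. The hash fields and algorithm shown are a generic example, not the specific method used by any particular switch or by VxRail.

```python
import hashlib

# Generic illustration of hash-based load balancing across a link aggregation
# group (LAG). Real switches use vendor-specific hash fields and algorithms;
# the member names, fields, and hash below are example choices only.
LAG_MEMBERS = ["eth1/1", "eth1/2"]   # two physical links in the LAG

def select_member(src_mac, dst_mac, src_ip, dst_ip):
    """Pick a LAG member for a flow; the same flow always maps to the same link."""
    key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return LAG_MEMBERS[digest % len(LAG_MEMBERS)]

# Different flows may land on different links, spreading the load.
print(select_member("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", "192.0.2.10", "192.0.2.20"))
print(select_member("aa:bb:cc:00:00:03", "aa:bb:cc:00:00:04", "192.0.2.11", "192.0.2.21"))
```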